Student Projects

Student Projects by Associated Institutes

ETH Zurich uses SiROP to publish and search scientific projects. For more information visit sirop.org.

Humanoid robot ladder climbing via learning from demonstrations

Robotic Systems Lab

This thesis will employ learning from demonstration to enable a humanoid robot to climb ladders. The student will gain hands-on experience with cutting-edge machine learning, sim2real pipelines, and humanoid robot hardware.

Keywords

Humanoids, learning from demonstrations, imitation learning, reinforcement learning, simulation, legged robotics, control, robot, sim2real

Labels

Master Thesis

Description

Humanoid robots are attracting significant interest for their potential to transform manufacturing and home services. At the same time, advances in learning from demonstrations have enabled human-like feats of dexterity and agility in robotics [1-8]. However, humanoids remain limited to relatively structured tasks and lack the robustness and speed needed for unstructured environments. Building on our prior success in enabling quadruped robots to climb ladders [9], this project aims to extend ladder-climbing capabilities to humanoid robots using modern learning from demonstration techniques to facilitate training and develop natural motions. Unlike previous attempts at humanoid ladder climbing, which were slow and sensitive to perturbations [10, 11], our goal is to develop a more agile and reliable system—unlocking critical real-world applications such as disaster response and industrial inspection.

[1] Peng, X.B., Abbeel, P., Levine, S. and Van de Panne, M., 2018. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG), 37(4), pp.1-14.

[2] Peng, X.B., Ma, Z., Abbeel, P., Levine, S. and Kanazawa, A., 2021. AMP: Adversarial motion priors for stylized physics-based character control. ACM Transactions on Graphics (TOG), 40(4), pp.1-20.

[3] Li, C., Vlastelica, M., Blaes, S., Frey, J., Grimminger, F. and Martius, G., 2023. Learning agile skills via adversarial imitation of rough partial demonstrations. In Conference on Robot Learning (pp. 342-352). PMLR.

[4] Li, C., Blaes, S., Kolev, P., Vlastelica, M., Frey, J. and Martius, G., 2023. Versatile skill control via self-supervised adversarial imitation of unlabeled mixed motions. In 2023 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2944-2950). IEEE.

[5] Peng, X.B., Guo, Y., Halper, L., Levine, S. and Fidler, S., 2022. ASE: Large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Transactions on Graphics (TOG), 41(4), pp.1-17.

[6] Tessler, C., Kasten, Y., Guo, Y., Mannor, S., Chechik, G. and Peng, X.B., 2023. CALM: Conditional adversarial latent models for directable virtual characters. In ACM SIGGRAPH 2023 Conference Proceedings (pp. 1-9).

[7] Li, C., Stanger-Jones, E., Heim, S. and Kim, S., 2024. FLD: Fourier latent dynamics for structured motion representation and learning. arXiv preprint arXiv:2402.13820.

[8] Watanabe, R., Li, C. and Hutter, M., 2025. DFM: Deep Fourier Mimic for expressive dance motion learning. arXiv preprint arXiv:2502.10980.

[9] Vogel, D., Baines, R., Church, J., Lotzer, J., Werner, K. and Hutter, M., 2025. Robust ladder climbing with a quadrupedal robot. 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (accepted; preprint available on arXiv).

[10] Vaillant, J., et al., 2016. Multi-contact vertical ladder climbing with an HRP-2 humanoid. Autonomous Robots, 40(3), pp.561-580.

[11] Yoneda, H., et al., 2008. Vertical ladder climbing motion with posture control for multi-locomotion robot. In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 3579-3584). IEEE.

Work Packages

  • Literature research
  • Design the task simulation environment
  • Develop autonomous control policies for ladder climbing
  • Deploy the control policy on real hardware (hardware already provided)

Requirements

  • Excellent knowledge of physical simulation
  • Deep knowledge of coding in Python and C++
  • Ability to work independently and creatively
  • Experience with robot sim2real preferred
  • Experience with learning from demonstration algorithms

Contact Details

Robert Baines (rbaines@ethz.ch), Alex Hansson (ahansson@mavt.ethz.ch), Chenhao Li (chenhao.li@ai.ethz.ch)


More information

Open this project... 

Published since: 2025-07-03 , Earliest start: 2025-07-21 , Latest end: 2026-05-01

Organization Robotic Systems Lab

Hosts Li Chenhao , Baines Robert

Topics Information, Computing and Communication Sciences , Engineering and Technology

3D-Printed Hydrogel-Based Composites with Engineered Core–Shell Magnetoelectric Networks For Biomedical Applications

Multiscale Robotics Lab

Magnetoelectric materials are highly promising in biomedicine due to their unique ability to couple magnetic and electric fields. This coupling allows for remote and precise control of various biological processes. For instance, in drug delivery, magnetoelectric nanoparticles can be directed to specific locations within the body using an external magnetic field, followed by electrical stimulation to trigger the release of therapeutic agents. Their responsiveness and multifunctionality make magnetoelectrics a versatile tool in advancing non-invasive medical treatments and targeted therapies. In this project, we aim to improve the core–shell architecture of the magnetoelectric nanoparticles (ME NPs). Afterwards, a reliable protocol to create homogeneous and colloidally stable inks (i.e. mixtures of the ME NPs and a hydrogel) will be established. The ink formulation will be tested in the custom-made 3D printer. Finally, multifunctional composites will be fabricated and tested for brain tissue stimulation.

Keywords

Nanoparticle, Iron Oxide, Barium Titanate, Surface engineering, Ink formulation, Additive Manufacturing, Digital Light Processing, Brain Tissue, Wireless Stimulation

Labels

Semester Project , Bachelor Thesis , Master Thesis , ETH Zurich (ETHZ)

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-07-02 , Earliest start: 2025-07-02 , Latest end: 2026-03-31

Applications limited to ETH Zurich

Organization Multiscale Robotics Lab

Hosts Pustovalov Vitaly

Topics Engineering and Technology , Chemistry , Biology

Learning Acrobatic Excavator Maneuvers

Robotic Systems Lab

Gravis Robotics is an ETH spinoff from the Robotic Systems Lab (RSL) working on the automation of heavy machinery (https://gravisrobotics.com/). In this project you will work with the Gravis team to develop an algorithm that allows a 25-ton excavator to perform an acrobatic maneuver: the jump turn.

Labels

Semester Project , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-07-02 , Earliest start: 2025-08-01 , Latest end: 2025-12-01

Organization Robotic Systems Lab

Hosts Egli Pascal Arturo , Zhang Weixuan

Topics Engineering and Technology

Deep Learning of Residual Physics For Soft Robot Simulation

Soft Robotics Lab

Incorporating state-of-the-art deep learning approaches to augment conventional soft-robotic simulations, yielding fast, accurate, and useful simulations of real soft robots.

Keywords

Soft Robotics, Machine Learning, Physical Modeling, Simulation

Labels

Semester Project , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-07-01 , Earliest start: 2025-03-01 , Latest end: 2026-03-01

Organization Soft Robotics Lab

Hosts Michelis Mike , Katzschmann Robert, Prof. Dr.

Topics Information, Computing and Communication Sciences , Engineering and Technology

Bridging Human-Readable and Robot-Perceived Maps: CAD-SLAM Alignment and Refinement

Autonomous Systems Lab

This thesis proposes an industrial collaboration with 7Sense Robotics on enabling robots to take advantage of existing building models for their localization and navigation, by aligning the robot's visual SLAM map to CAD models. This is an exciting opportunity to push the state of the art in research and in practical demonstrations on real robots.

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

Background

Visual SLAM (Simultaneous Localization and Mapping) enables robots to navigate unknown environments by building 3D maps using visual data from onboard cameras. While these maps are optimized for robotic perception and localization, they often lack the structural clarity and semantic richness of CAD models that humans rely on for spatial understanding. Additionally, they often differ in scale, completeness, and accuracy due to sensor noise and perception limitations. These mismatches create a communication gap between autonomous systems and human operators.

Goals

To bridge this gap, this thesis explores the problem of registering maps generated by SLAM with CAD floor plans. By establishing the projection between the robot-centric and human-centric representations, the approach will improve human-robot interaction, environment interpretation, and integration of autonomous mobile robots into existing workflows that rely on structured 2D CAD maps. Additionally, the thesis will explore how such alignment can support refining SLAM maps using CAD priors.

Proposed Method

The alignment will be formulated as a non-linear projection problem between the two map representations [1], while preserving local and global consistency of given correspondences. The problem can be solved using optimization techniques. As an optional continuation, the accurate geometric and semantic information in the CAD structure can be utilized to correct the distortion in the SLAM map. The approach will be evaluated on a real robot. The exact method remains open to refinement and innovation, and students are encouraged to explore alternative methods.
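As a toy illustration of the registration step (not the proposed method itself), the sketch below estimates a 2D similarity transform aligning SLAM landmarks to CAD coordinates using the closed-form Umeyama solution. It assumes point correspondences between the two maps are already given, which is itself part of the research problem; all names are illustrative.

```python
import numpy as np

def align_similarity_2d(slam_pts, cad_pts):
    """Estimate scale s, rotation R, translation t minimizing
    sum ||s * R @ p_slam + t - p_cad||^2 (Umeyama closed form).
    Both inputs are (N, 2) arrays of corresponding points."""
    mu_s = slam_pts.mean(axis=0)
    mu_c = cad_pts.mean(axis=0)
    X = slam_pts - mu_s                      # centered SLAM points
    Y = cad_pts - mu_c                       # centered CAD points
    cov = Y.T @ X / len(slam_pts)            # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[1, 1] = -1                         # avoid reflections
    R = U @ S @ Vt
    var_s = (X ** 2).sum() / len(slam_pts)   # variance of SLAM cloud
    s = np.trace(np.diag(D) @ S) / var_s     # optimal scale
    t = mu_c - s * R @ mu_s
    return s, R, t
```

A full solution would additionally have to establish the correspondences, handle outliers robustly, and move beyond a rigid/similarity model to the non-linear formulation described above.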

Expected outcome

● A novel spatial registration algorithm to align SLAM-generated maps with architectural CAD models

● A quantitative and qualitative analysis of the proposed method.

Work Packages

Requirements

● Strong self-motivation and curiosity for solving challenging robotic problems

● Previous experience in the fields of optimization, computer vision and VSLAM

● Excellent programming skills in C++ and Python

● Experience with Linux, ROS, and typical development tools such as git are advantageous

● An excellent academic record is desirable, but may be compensated by expert knowledge in the areas mentioned above

Contact Details

If you are interested, please send your transcripts and CV to Zimeng Jiang (zimeng.jiang@sevensense.ch) and Roxane Merat (roxane.merat@sevensense.ch).


More information

Open this project... 

Published since: 2025-07-01

Organization Autonomous Systems Lab

Hosts Oleynikova Elena

Topics Information, Computing and Communication Sciences , Engineering and Technology

LiDAR-Visual-Inertial Odometry with a Unified Representation

Autonomous Systems Lab

Lidar-Visual-Inertial odometry approaches [1-3] aim to overcome the limitations of the individual sensing modalities by estimating a pose from heterogeneous measurements. Lidar-inertial odometry often diverges in environments with degenerate geometric structures and visual-inertial odometry can diverge in environments with uniform texture. Many existing lidar-visual-inertial odometry approaches use independent lidar-inertial and visual-inertial pipelines [2-3] to compute odometry estimates that are combined in a joint optimisation to obtain a single pose estimate. These approaches are able to obtain a robust pose estimate in degenerate environments but often underperform lidar-inertial or visual-inertial methods in non-degenerate scenarios due to the complexity of maintaining and combining odometry estimates from multiple representations.

Keywords

Odometry, SLAM, Sensor Fusion

Labels

Semester Project , Master Thesis

Description

The goal of this project is to develop a lidar-visual-inertial odometry approach that integrates visual and lidar measurements into a single unified representation. The starting point, inspired by FAST-LIVO2 [1], will be to investigate methods for efficiently combining visual patches from camera images with a set of geometric primitives extracted from FAST-LIO2 [4], a lidar-inertial odometry pipeline. The performance of the resulting approach will be evaluated in comparison with existing lidar-visual-inertial odometry approaches.

References:

[1] C. Zheng et al., “FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry,” IEEE Transactions on Robotics, 2024.

[2] J. Lin and F. Zhang, “R3LIVE++: A Robust, Real-time, Radiance reconstruction package with a tightly-coupled LiDAR-Inertial-Visual state Estimator,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.

[3] T. Shan, B. Englot, C. Ratti, and D. Rus, “LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping,” in IEEE International Conference on Robotics and Automation, 2021.

[4] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast Direct LiDAR-inertial Odometry,” arXiv, 2021.

Work Packages

  • WP1: Literature review of work on lidar-visual-inertial odometry.
  • WP2: Develop a lidar-visual-inertial odometry approach with a single unified representation.
  • WP3: Evaluate the performance of the approach in comparison with existing work.

Requirements

Experience with C++ and ROS.

Contact Details

Please send CV and transcripts to Rowan Border (border.rowan@ucy.ac.cy) and Ruben Mascaro (rmascaro@ethz.ch).


More information

Open this project... 

Published since: 2025-07-01 , Earliest start: 2025-07-01 , Latest end: 2026-02-28

Applications limited to ETH Zurich

Organization Autonomous Systems Lab

Hosts Mascaro Rubén , Chli Margarita

Topics Information, Computing and Communication Sciences

Collision Avoidance - Master Thesis at Avientus

Autonomous Systems Lab

Avientus is a startup that specializes in developing cutting-edge, heavy-duty automated drone transportation systems designed to revolutionize logistics and industrial applications. To further enhance the safety and reliability of their drones, we are offering a Master Thesis opportunity in the field of collision avoidance for drones.

Keywords

Collision avoidance, Computer vision, Drones

Labels

Master Thesis

Description

The objective of the thesis is to develop an algorithm capable of real-time collision avoidance for drones, ensuring safe operation even in environments with other air traffic, such as airplanes, helicopters, paragliders, and similar obstacles. The tasks include selecting and evaluating existing neural networks and algorithms, or designing a custom solution tailored to our requirements. The project also involves working with a high-performance hardware setup, with support for real-time execution on an NVIDIA Jetson Orin.

Avientus provides a flexible and dynamic research environment with a budget for necessary hardware, such as cameras and sensors. Students are encouraged to experiment with and choose tools, hardware, and software frameworks that best suit their approach. This thesis offers a unique opportunity to work on cutting-edge technology in a startup environment, contributing directly to the future of automated drone logistics.

If you are passionate about robotics, computer vision, and real-time systems, and are excited to tackle challenges in the aviation domain, we’d love to hear from you!

Work Packages

Requirements

  • Strong competencies in computer vision & neural networks
  • Good knowledge in programming (Python and / or C++)
  • General interest in VTOL drones

Contact Details

Please send your CV and transcripts to info@avientus.ch and rmascaro@ethz.ch


More information

Open this project... 

Published since: 2025-07-01 , Earliest start: 2025-08-01 , Latest end: 2026-02-28

Applications limited to ETH Zurich

Organization Autonomous Systems Lab

Hosts Mascaro Rubén , Chli Margarita

Topics Information, Computing and Communication Sciences

Visual Object Reconstruction and Generalized Collision-aware Manipulation using Reinforcement Learning

Robotic Systems Lab

We propose to develop a neural motion planning method that incorporates 3D-reconstructed geometry of in-hand objects into a reinforcement learning policy for collision-aware manipulation. This enables generalization across novel tools and objects with precise, geometry-driven motion in both stationary and mobile robotic setups.

Keywords

Neural motion planning, 3D reconstruction, reinforcement learning, mobile manipulation

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

Neural motion planners [1, 2] distill expert trajectories from classical motion planners [3, 4] into fast, reactive, and generalizable policies for robotic arms. However, existing neural planners do not account for the presence of in-hand objects, leading to significant limitations including (a) the influence of additional gravitational forces and (b) increased complexity in collision geometry. This project extends such methods to handle novel objects and tools with unknown geometry. Using a camera mounted on the robot's end-effector, the system first performs 3D reconstruction of the target object [5]. The resulting shape representation is fused into the state space of a reinforcement learning policy trained to reach and manipulate objects while avoiding self-collisions and obstacles in the environment. Unlike prior work [6], which uses conservative bounding box representations, our method leverages precise reconstructed geometry for fine-grained, tool- and object-aware collision avoidance. The goal is to learn a single policy that generalizes across different tools and objects, adapting its behavior based on their geometry. The policy will be validated both in simulation and on hardware, first with a stationary arm and then on a mobile manipulator combining a legged base and a robotic arm.

[1] A. Fishman, A. Murali, C. Eppner, B. Peele, B. Boots, and D. Fox, “Motion policy networks,” in Conference on Robot Learning. PMLR, 2023, pp. 967-977.

[2] M. Dalal et al., “Neural MP: A Generalist Neural Motion Planner,” 2024.

[3] N. Ratliff et al., “CHOMP: Gradient optimization techniques for efficient motion planning,” in 2009 IEEE International Conference on Robotics and Automation.

[4] M.P. Strub et al., “Adaptively informed trees (AIT*): Fast asymptotically optimal path planning through adaptive heuristics,” in 2020 IEEE ICRA.

[5] Y. Hong et al., “See-Then-Grasp: Object Full 3D Reconstruction via Two-Stage Active Robotic Reconstruction Using Single Manipulator,” Appl. Sci., 2025.

[6] J. Lee et al., “Learning Fast, Tool-Aware Collision Avoidance for Collaborative Robots,” IEEE Robotics and Automation Letters, vol. 10, no. 8, pp. 7731-7738, Aug. 2025.

Work Packages

  • Literature review on visual object reconstruction and shape representations (e.g. NeRF, Gaussian Splatting, SDFs)
  • Train an RL policy that uses both robot state and reconstructed object geometry to perform collision-aware manipulation
  • Evaluate the policy on unseen tools and objects, testing generalization and robustness
  • Deploy the pipeline on a physical arm and mobile manipulator

Requirements

  • Good knowledge of Python
  • Knowledge of reinforcement learning and Isaac Sim (preferred)
  • Experience with robotic hardware
  • Understanding of object reconstruction methods

Contact Details


More information

Open this project... 

Published since: 2025-06-30 , Earliest start: 2025-08-31 , Latest end: 2026-07-31

Organization Robotic Systems Lab

Hosts Portela Tifanny , Zurbrügg René

Topics Information, Computing and Communication Sciences

Activity and fatigue detection using machine learning based on real-world data from smart clothing

Biomedical and Mobile Health Technology Lab

The aim of this project is to use machine learning methods to extract useful information such as activity type and fatigue level from real-world data acquired from our textile-based wearable technology during sport activities.

Keywords

smart clothing, wearable technology, textile sensor, fitness tracking, sports medicine, fatigue, machine learning, artificial intelligence, computer science

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Sport monitoring has many benefits, including injury prevention and performance optimization. Current methods for activity monitoring in sports mostly rely on camera-based motion tracking or inertial measurement unit-based systems, which are limited in their measurement space or obtrusive to the activities of the user. Smart clothing offers a solution that makes monitoring of biomechanics during individual and team sports possible in different conditions.

We have developed textile-based wearable technology for unobtrusive monitoring of movement and completed a study in which we acquired real-world data from this technology during sport activities. Using machine learning methods, this project aims to extract useful information from these data, such as activity type and fatigue. This project will offer valuable experience in data processing, machine learning, working with real-world data, and cutting-edge wearable technologies.
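To make the pipeline concrete, here is a minimal sketch of window-based feature extraction and a toy activity classifier run on synthetic signals; the window length, the chosen features, and the nearest-centroid model are illustrative assumptions, not the lab's actual method or data.

```python
import numpy as np

def window_features(signal, win=100):
    """Split a 1-D sensor signal into fixed-length windows and extract
    simple time-domain features (mean, std, RMS) per window."""
    n = len(signal) // win
    w = signal[: n * win].reshape(n, win)
    return np.stack([w.mean(1), w.std(1), np.sqrt((w ** 2).mean(1))], axis=1)

class NearestCentroid:
    """Minimal activity classifier: one feature centroid per activity."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(0) for c in self.classes_])
        return self
    def predict(self, X):
        # distance from each window's features to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(1)]
```

In practice one would use real textile-sensor recordings, richer features (frequency-domain, cross-channel), and stronger models, but the window-features-classifier structure stays the same.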

Your Profile

  • Background in electrical engineering, computer engineering, mechanical engineering, or related fields

  • Prior experience with data analysis (signal processing, machine learning algorithms, or similar) and machine learning

  • Independent worker with critical thinking skills and problem solving skills

Goal

  • Investigate methods to classify different sport-specific activities or predict biomechanical measures based on real-world data.
  • Develop and test models to assess the level of fatigue during sports.
  • Improve the model and compare it with other measurement methods.

Contact Details

Prof Dr Carlo Menon and Dr Chakaveh Ahmadizadeh will supervise the student and the research will be performed at the Biomedical and Mobile Health Technology lab (www.bmht.ethz.ch) at ETH Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation") along with the type of project you are applying for (e.g., master project or thesis) and your timeline for the project; attach a mini CV with your current program of study, your grades, and any other info you deem relevant, such as the name and phone number of a postdoc or a professor willing to be your reference; and make any further comments ("additional remarks").

More information

Open this project... 

Published since: 2025-06-30 , Earliest start: 2025-07-01 , Latest end: 2026-03-31

Organization Biomedical and Mobile Health Technology Lab

Hosts Ahmadizadeh Chakaveh

Topics Information, Computing and Communication Sciences , Engineering and Technology

Design data acquisition solution for smart clothing

Biomedical and Mobile Health Technology Lab

The aim of this project is to develop and improve wearable electronics solutions for data acquisition from textile-based sensors used in our smart clothing.

Keywords

smart clothing, wearable technology, textile sensor, fitness tracking, sports medicine, PCB, electronics, computer science

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Movement monitoring is a rapidly growing field with many applications in healthcare and fitness. Current methods for activity monitoring mostly rely on camera-based motion tracking or inertial measurement unit-based systems, which are limited in their measurement space or obtrusive to the activities of the user. Smart clothing offers a solution that makes monitoring of biomechanics during individual and team sports possible in different conditions.

We develop textile-based wearable technologies for unobtrusive monitoring of movement based on capacitive, resistive, or inductive sensors. Signals from these sensors are often acquired using benchtop LCR meters. However, it is crucial for our wearable applications to replace such solutions with portable and unobtrusive data acquisition methods. This project aims to design, develop, and test portable data acquisition systems for data recording from multiple capacitive sensors. This project will offer valuable experience in electronics, prototyping, embedded software, and cutting-edge wearable technologies.

Your Profile

  • Background in electrical engineering, computer engineering, mechanical engineering, or related fields

  • Prior experience with electronics and prototyping (printed circuit board, electronic circuit testing and troubleshooting, or similar)

  • Independent worker with critical thinking skills and problem solving skills

Goal

  • Identify areas for improvement in current data acquisition solutions.
  • Develop and test electronics and software for signal recording from capacitive sensors.
  • Implement developed solutions into textile-based wearable technologies for movement monitoring.

Contact Details

Prof Dr Carlo Menon and Dr Chakaveh Ahmadizadeh will supervise the student and the research will be performed at the Biomedical and Mobile Health Technology lab (www.bmht.ethz.ch) at ETH Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation") along with the type of project you are applying for (e.g., master project or thesis) and your timeline for the project; attach a mini CV with your current program of study, your grades, and any other info you deem relevant, such as the name and phone number of a postdoc or a professor willing to be your reference; and make any further comments ("additional remarks").

More information

Open this project... 

Published since: 2025-06-30 , Earliest start: 2025-07-01 , Latest end: 2025-12-31

Organization Biomedical and Mobile Health Technology Lab

Hosts Ahmadizadeh Chakaveh

Topics Information, Computing and Communication Sciences , Engineering and Technology

AI-Based Estimation of Blood Pressure via Pulse Transit Time and Vascular Dynamics from piezoelectric sensors

Biomedical and Mobile Health Technology Lab

This project explores the development and validation of an AI-based system for non-invasive blood pressure estimation. The method focuses on deriving pulse transit time (PTT) using dual A-mode piezoelectric (ultrasound) sensors and characterizing vascular features such as arterial wall diameter and flow velocity. The work contributes toward a future wearable ultrasound-based solution for continuous cardiovascular monitoring.

Keywords

machine learning, deep learning, artificial intelligence, blood pressure, pulse transit time, vascular imaging, Doppler

Labels

Semester Project , Internship , Master Thesis

Description

In this project, we will characterize and evaluate the use of piezoelectric sensors to monitor blood pressure using AI approaches. Data will be acquired using two sensors placed along a major artery, which will detect the pulse wave. From this, key physiological features will be extracted, such as arterial wall displacement, diameter variation, blood flow velocity, and pulse transit time, which may be used in computational models of blood pressure estimation (e.g., the Bramwell–Hill model). Machine learning and deep learning approaches will be used for blood pressure estimation and compared to established computational models. This project will offer valuable experience in data processing, machine learning, and working with real-world health data.
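For intuition on the modeling step, the Bramwell–Hill relation links the pulse wave velocity obtained from PTT between the two sensors to a pressure change. The sketch below uses illustrative numbers (blood density, sensor spacing, relative volume change); none are values from this project.

```python
RHO_BLOOD = 1060.0  # typical blood density, kg/m^3 (assumed)
PA_PER_MMHG = 133.322

def pulse_wave_velocity(t_proximal, t_distal, sensor_distance_m):
    """Pulse wave velocity (m/s) from the pulse transit time between
    two sensors spaced sensor_distance_m apart along an artery."""
    ptt = t_distal - t_proximal
    return sensor_distance_m / ptt

def bramwell_hill_dp(pwv, v, dv):
    """Pressure change (Pa) from the Bramwell-Hill relation
    PWV^2 = (V / rho) * (dP / dV)  =>  dP = rho * PWV^2 * dV / V,
    where v is arterial segment volume and dv its pulsatile change."""
    return RHO_BLOOD * pwv ** 2 * dv / v
```

For example, sensors 5 cm apart with a 10 ms transit time give a PWV of 5 m/s; with a 10% relative volume change, the relation yields roughly 2650 Pa (about 20 mmHg) of pulse pressure. The ML/DL models in this project would be calibrated against and compared to this kind of physical model.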

Goal

  • Implement physiological feature extraction algorithms
  • Implement mathematical, ML, and DL models for blood pressure estimation

Tasks

  • Literature review (20%)
  • Data processing and key physiological feature extraction (20%)
  • Implement and calibrate mathematical models for blood pressure estimation (10%)
  • Implement ML and DL models for blood pressure estimation (30%)
  • Test and evaluation of models and comparison with gold standard (10%)
  • Report and presentation (10%)

Contact Details

Prof Dr Carlo Menon, Dr. Corin Otesteanu (corin.otesteanu@hest.ethz.ch) will supervise the student and the research will be performed at the Biomedical and Mobile Health Technology lab (www.bmht.ethz.ch) at ETH Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation") along with the type of project you are applying for (e.g., master project or thesis) and your timeline for the project; attach a mini CV with your current program of study, your grades, and any other info you deem relevant, such as the name and phone number of a postdoc or a professor willing to be your reference; and make any further comments ("additional remarks").

More information

Open this project... 

Published since: 2025-06-28 , Earliest start: 2025-07-01 , Latest end: 2026-07-31

Organization Biomedical and Mobile Health Technology Lab

Hosts Otesteanu Corin, Dr

Topics Information, Computing and Communication Sciences , Engineering and Technology

AI-Based Motion Estimation and Fatigue Monitoring Using Triboelectric Nanogenerators

Biomedical and Mobile Health Technology Lab

This project explores the feasibility of using triboelectric nanogenerators (TENGs) for joint angle analysis and fatigue monitoring during repetitive human movements using machine learning and deep learning.

Keywords

machine learning, artificial intelligence, generative AI, triboelectric nanogenerator, joint angle estimation, motion analysis, fatigue detection

Labels

Semester Project , Internship , Master Thesis

Description

Wearable triboelectric nanogenerators (TENGs) are emerging as energy-harvesting and self-powered sensors that can transduce biomechanical motion into electrical signals. In this project, we will characterize and evaluate TENGs for real-time joint motion estimation and fatigue analysis. Controlled experiments will be conducted during repetitive tasks using an optical motion capture system or inertial measurement unit (IMU) as gold standard. The dataset will consist of TENG voltage outputs synchronized with 3D kinematic data from the reference systems. This project will offer valuable experience in data processing, machine learning, and working with real-world health data.

Goal

Goals • Characterize the TENG sensor signal for different joint movements • Implement ML and DL models for joint angle reconstruction • Benchmarking shallow, deep, and transformers based models for performance

Tasks

  • Literature review (10%)
  • Characterize the TENG sensor signal in controlled settings (10%)
  • Implement ML and DL models for angle and angular velocity reconstruction (20%)
  • Data collection from 10-15 participants during a standardized protocol (30%)
  • Test and evaluation of models and comparison with the gold standard (20%)
  • Report and presentation (10%)
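As a rough illustration of the shallow end of the model spectrum, the sketch below fits a closed-form ridge regression from simple windowed features of a synthetic TENG-like voltage trace to a joint angle. The signal, window size, and features are illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np

def window_features(voltage, win=50):
    """Split a 1-D voltage trace into windows and extract simple features
    (mean, std, peak-to-peak) per window. Feature choice is illustrative."""
    n = len(voltage) // win
    w = voltage[: n * win].reshape(n, win)
    return np.column_stack([w.mean(axis=1), w.std(axis=1), np.ptp(w, axis=1)])

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: solve (X^T X + lam I) w = X^T y."""
    Xb = np.column_stack([X, np.ones(len(X))])  # append bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def predict(w, X):
    Xb = np.column_stack([X, np.ones(len(X))])
    return Xb @ w

# Synthetic stand-in: joint angle proportional to per-window signal amplitude.
rng = np.random.default_rng(0)
voltage = rng.normal(0, 1, 5000) * np.repeat(rng.uniform(0.5, 2.0, 100), 50)
X = window_features(voltage)
angle = 30 * X[:, 1] + rng.normal(0, 0.5, 100)  # degrees, with sensor noise

w = fit_ridge(X[:80], angle[:80])               # train on first 80 windows
err = np.abs(predict(w, X[80:]) - angle[80:]).mean()
print(f"mean abs error on held-out windows: {err:.2f} deg")
```

Deep and transformer-based models would replace the hand-crafted features and linear map, but would be benchmarked against exactly this kind of baseline.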

Contact Details

Prof Dr Carlo Menon, Dr. Corin Otesteanu (corin.otesteanu@hest.ethz.ch) and Dr. Satyaranjan Bairagi (satyaranjan.bairagi@hest.ethz.ch) will jointly supervise the student, and the research will be performed at the Biomedical and Mobile Health Technology Lab (www.bmht.ethz.ch) at ETH Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation"), the type of project you are applying for (e.g., master project or thesis), and your timeline. Attach a mini CV with your current program of study, your grades, and any other information you deem relevant, such as the name and phone number of a postdoc or professor willing to serve as a reference. Add any further comments under "additional remarks".

More information

Open this project... 

Published since: 2025-06-28 , Earliest start: 2025-08-01 , Latest end: 2026-08-01

Organization Biomedical and Mobile Health Technology Lab

Hosts Otesteanu Corin, Dr

Topics Information, Computing and Communication Sciences , Engineering and Technology

Artificial intelligence for anxiety level classification using data from wearable devices

Biomedical and Mobile Health Technology Lab

The aim of this project is to study the feasibility of using wearable devices for anxiety detection using machine learning models. By creating a robust framework for continuous monitoring and early assessment, it has the potential to meaningfully impact the user's wellbeing.

Keywords

wearable technology, anxiety monitoring, health tracking, machine learning, artificial intelligence

Labels

Semester Project , Internship , Master Thesis

Description

The project aims to evaluate the effectiveness of wearable technology in identifying anxiety in individuals. This research makes use of publicly available datasets of individuals watching anxiety-inducing videos. The data gathered from these devices includes continuous measurements over one hour of respiration, ECG, and electrodermal activity. By developing machine learning methods, the aim is to continuously detect anxiety levels in an individualized manner. This project will offer valuable experience in data processing, machine learning, and working with real-world health data. Moreover, it has the potential to meaningfully impact the user's wellbeing.

Goal

  • Quantitatively analyze the effect of anxiety-inducing videos on several biomarkers extracted from wearable data
  • Investigate different machine learning models to classify anxiety levels (low, normal, high) from wearable device data

Tasks

  • Literature review (10%)
  • Data analysis (loading data, data filtering) (30%)
  • Design and implement a shallow machine learning model for anxiety level classification (20%)
  • Design and implement a recurrent neural network for anxiety level classification (20%)
  • Test and evaluation of models (10%)
  • Report and presentation (10%)
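The biomarker-extraction step can be sketched with standard time-domain heart-rate-variability (HRV) features computed from ECG-derived RR intervals; lower HRV is commonly associated with higher arousal. The feature set and example values below are illustrative only, not the project's datasets.

```python
import numpy as np

def hrv_features(rr_ms):
    """Standard time-domain HRV features from RR intervals (milliseconds).
    Feature choice here is illustrative, not a prescribed pipeline."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),        # average heart rate
        "sdnn_ms": rr.std(ddof=1),                 # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat variability
    }

# Toy example: a calm recording (slow, variable beats) vs. a stressed one
# (fast, nearly constant beats).
calm = [800, 810, 790, 820, 805, 795, 815]
stressed = [600, 602, 598, 601, 599, 600, 603]

f_calm, f_stressed = hrv_features(calm), hrv_features(stressed)
print(f_calm["rmssd_ms"], f_stressed["rmssd_ms"])
```

Features of this kind, windowed over the recording, would be the inputs to the shallow classifier, while the recurrent network could consume the raw synchronized signals directly.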

Contact Details

Prof Dr Carlo Menon and Dr. Corin Otesteanu (corin.otesteanu@hest.ethz.ch) will supervise the student and the research will be performed at the Biomedical and Mobile Health Technology lab (www.bmht.ethz.ch) at ETH Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation"), the type of project you are applying for (e.g., master project or thesis), and your timeline. Attach a mini CV with your current program of study, your grades, and any other information you deem relevant, such as the name and phone number of a postdoc or professor willing to serve as a reference. Add any further comments under "additional remarks".

More information

Open this project... 

Published since: 2025-06-28 , Earliest start: 2025-08-01 , Latest end: 2026-07-01

Organization Biomedical and Mobile Health Technology Lab

Hosts Otesteanu Corin, Dr

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology

X-Ray based registration of vessel models

Multiscale Robotics Lab

We are looking for a motivated Master student for the ANGIE project, which envisions a future where targeted drug delivery is made possible through magnetically guided capsules. To enable localisation, the student will develop an algorithm to estimate the 6 DoF pose of a vascular model using a 2D low-resolution X-ray image.

Keywords

Computer vision, medical imaging, pose estimation

Labels

Master Thesis

Description

Goal

6 DoF pose estimation of a vessel model using a 2D X-ray image
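At its core, estimating a 6 DoF pose from a single projection means minimizing the reprojection error between projected model points and their observed 2D positions. The minimal sketch below (pinhole camera, toy centerline points, assumed focal length) shows the residual such an algorithm would optimize; it is not the project's actual X-ray imaging model.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis; a full pose would use all three axes."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def project(points, R, t, f=1000.0):
    """Pinhole projection of 3-D model points under pose (R, t).
    f is an assumed focal length in pixels."""
    cam = points @ R.T + t              # model frame -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]

def reprojection_error(points, obs, R, t):
    """Mean 2-D distance between projected points and observations --
    the quantity a 6 DoF pose optimizer would minimize."""
    return np.linalg.norm(project(points, R, t) - obs, axis=1).mean()

# Toy vessel "model": a few 3-D centerline points in front of the camera.
model = np.array([[0.0, 0, 10], [1, 0, 11], [1, 1, 12], [0, 2, 13]])
R_true, t_true = rot_z(0.1), np.array([0.2, -0.1, 0.0])
obs = project(model, R_true, t_true)

# The true pose explains the observations exactly; a perturbed pose does not.
e_true = reprojection_error(model, obs, R_true, t_true)
e_off = reprojection_error(model, obs, rot_z(0.3), t_true)
```

In practice the optimizer would search over all six pose parameters, and the low-resolution, noisy X-ray data would make robust matching the hard part.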

Contact Details

Derick Sivakumaran, sderick@ethz.ch

More information

Open this project... 

Published since: 2025-06-27 , Earliest start: 2025-07-13 , Latest end: 2026-03-31

Organization Multiscale Robotics Lab

Hosts Sivakumaran Derick

Topics Information, Computing and Communication Sciences , Engineering and Technology

Volumetric Bioprinting of Engineered Muscle Tissues

Soft Robotics Lab

We are working with an innovative volumetric printing technique – Xolography – to fabricate engineered muscle tissues that function as bioactuators for biohybrid systems. You will work at the interface between biology and robotics, helping us explore new designs and strategies to advance the field of muscle tissue engineering and muscle-powered living machines.

Keywords

bioprinting, muscle, tissue engineering, 3D cell culture, hydrogels, biohybrid robotics, regenerative medicine, 3D models, biomaterials, biofabrication.

Labels

Semester Project , Bachelor Thesis , Master Thesis , ETH Zurich (ETHZ)

Description

Volumetric bioprinting is a powerful technology for tissue engineering that enables the fabrication of cell-laden constructs with exceptional speed, resolution, and geometrical freedom.

We offer student projects that can be tailored to fit your specific interests, ranging from 3D bioprinting to the development and characterization of testing platforms for engineered muscles. These research projects aim to combine expertise in biomaterials, 3D printing, and cell culture techniques with the final goal of generating functional living actuators.

Goal

Possible project goals:

  • Optimize xolographic printing with living and synthetic materials.
  • Design and print different bioactuator geometries.
  • Culture and characterize engineered muscle constructs.
  • Optimize/develop platforms to assess the bioactuator's performance.

Contact Details

More information

Open this project... 

Published since: 2025-06-27 , Earliest start: 2025-07-15

Organization Soft Robotics Lab

Hosts Badolato Asia

Topics Medical and Health Sciences , Engineering and Technology , Chemistry , Biology

Development of Wireless Ion Sensing Platforms using Metamaterials and Soft Biointerfaces

Biomedical and Mobile Health Technology Lab

This project explores the design and realization of a flexible, wireless ion-sensing patch by integrating resonant metamaterial structures with bio-interfacing soft materials. The system is intended for noninvasive detection of physiologically relevant ions from skin-interfaced fluids using passive sensing mechanisms.

Keywords

flexible electronics, metamaterials, wireless biosensors, resonant sensors, skin-compatible interfaces

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Wearable biosensing platforms are increasingly explored for continuous, noninvasive health monitoring. This project focuses on developing a wireless sensor system based on resonant metamaterials combined with soft, ion-selective interfaces that can extract biomarkers from skin-interfaced fluids. By leveraging changes in electromagnetic properties, the sensor can detect specific physiological signals without requiring batteries or complex electronics. Students will investigate flexible materials and resonator designs, gaining interdisciplinary experience across electronics, materials science, and biomedical sensing.
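The passive sensing principle can be sketched with the standard LC resonance formula f = 1/(2π√(LC)): ion binding changes the permittivity, and hence the capacitance, of the sensing interface, which shifts the resonant frequency that an external reader picks up wirelessly. The component values below are purely illustrative.

```python
import math

def resonant_frequency_hz(L_h, C_f):
    """Resonant frequency of an LC resonator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

# Illustrative values: a 100 nH loop with a 10 pF ion-sensitive capacitor.
L = 100e-9
f_base = resonant_frequency_hz(L, 10e-12)    # ~159 MHz baseline

# Ion binding raises the interface permittivity -> capacitance grows
# -> the resonance shifts down; the reader tracks this shift.
f_loaded = resonant_frequency_hz(L, 12e-12)
shift_mhz = (f_base - f_loaded) / 1e6
print(f"resonance shift: {shift_mhz:.1f} MHz")
```

Calibration then maps the measured frequency shift back to an ion concentration.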

Goal

  • Experimental work on hydrogel synthesis and functionalization
  • Fabrication and integration on a flexible substrate
  • Wireless measurement, calibration, and performance evaluation
  • Write a scientific project report

Contact Details

Prof Dr Carlo Menon, Dr. Muhammad Zada and Dr. Weifeng Yang will supervise the student, and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland.

More information

Open this project... 

Published since: 2025-06-26 , Earliest start: 2025-08-01 , Latest end: 2025-12-31

Organization Biomedical and Mobile Health Technology Lab

Hosts Zada Muhammad

Topics Engineering and Technology

3D reconstruction with open-vocabulary reconstruction priors

Robotic Systems Lab

Push the limits of arbitrary online video reconstruction by combining the most recent, prior-supported real-time Simultaneous Localization And Mapping (SLAM) methods with automatic backend regularization techniques.

Keywords

Structure from Motion, Visual SLAM

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

In recent years, the advent of learning-based methods has led to interesting novel structure from motion modules that provide end-to-end 3D structure, camera pose, and camera intrinsics estimation from arbitrary image sets [1,2]. While it is reasonable to assume that these representations implicitly impose geometric priors onto regularly encountered scene elements (e.g. chairs, tables, etc.), the imposition of these priors gets lost during the typical global back-end optimization step [3,4].

The imposition of geometric priors in back-end optimization is a topic with long-standing history, and nowadays encompasses both classical techniques such as piece-wise planarity or Manhattan world assumptions, as well as modern learning-based representations such as neural shape priors [5]. While certainly interesting, these approaches suffer from two important limitations:

  • Classical approaches are limited to a small set of pre-defined regularities that may be encountered in a certain class of environments (typically man-made). It is furthermore difficult to decide, without further cues, which parts of an environment should be regularized.
  • More modern approaches such as deep shape priors are typically trained on specific classes of objects only, and are thus limited to certain object-level applications.
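As a concrete example of a classical prior, the sketch below fits a plane to a set of 3D points via SVD and computes the point-to-plane residuals a back-end optimizer would penalize for a region flagged as planar. The data, and the mechanism that flags the region, are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points via SVD: returns (normal, centroid).
    The right-singular vector of the smallest singular value is the normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def planarity_residuals(points, normal, centroid):
    """Signed point-to-plane distances -- the residuals a back-end optimizer
    would add for a region flagged as planar."""
    return (points - centroid) @ normal

rng = np.random.default_rng(0)
# Noisy samples of the plane z = 0.2x - 0.1y + 1 (e.g. a reconstructed wall)
xy = rng.uniform(-1, 1, (200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 1 + rng.normal(0, 0.005, 200)
pts = np.column_stack([xy, z])

n, c = fit_plane(pts)
res = planarity_residuals(pts, n, c)
print(f"RMS point-to-plane residual: {np.sqrt((res**2).mean()):.4f}")
```

In this thesis, the novelty would be letting an LLM/VLM decide which scene parts deserve such residuals and which regularity applies, rather than hard-coding the assumption.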

The goal of the present thesis is to investigate the following topics:

  • Flexible discovery of regularities (surface regularities, repetitive scene elements) in large-scale outdoor reconstruction scenarios using LLMs/VLMs
  • Extraction of geometric surface properties from grounded open-set features
  • Automatic formulation of regularization priors using LLMs
  • Inclusion into back-end reconstruction for improved global reconstruction results

The proposed thesis will be conducted at the Robotics and AI (RAI) Institute, a new top-notch partner institute of Boston Dynamics pushing the boundaries of control and perception in robotics. For an example of the recent achievements of the institute, please consider our online video channel: Stunting with Reinforcement Learning

The down-stream task of the present project will be the creation of geometric models with high fidelity surfaces for inclusion into simulation environments.

The selection process will be highly competitive. Potential candidates are invited to submit their CV and grade sheet, after which selected students will be invited to an on-site interview.

References

[1] Grounding Image Matching in 3D with MASt3R, Vincent Leroy, Yohann Cabon, and Jerome Revaud
[2] VGGT: Visual Geometry Grounded Transformer, Wang, Chen, Karaev, Vedaldi, Rupprecht, Novotny
[3] MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors, Murai, Dexheimer, Davison
[4] VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold, Dominic Maggio, Hyungtae Lim, Luca Carlone
[5] DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation, Park, Florence, Straub, Newcombe, Lovegrove

Work Packages

  • Literature research
  • Familiarization with a modern 3D reconstruction pipeline
  • Exploration of LLMs/VLMs for automatic geometric regularity detection
  • Implementation and inclusion into back-end optimization
  • Testing and validation

Requirements

  • Excellent knowledge of Python and C++
  • Knowledge of computer vision
  • Experience in SLAM/reconstruction
  • Experience in applying learning-based representations
  • Interest in recent LLM/VLM architectures

Contact Details


More information

Open this project... 

Published since: 2025-06-26 , Earliest start: 2025-07-01

Organization Robotic Systems Lab

Hosts Kneip Laurent

Topics Engineering and Technology

Learning Manipulation beyond Single End-Effector

Robotic Systems Lab

Robots, like humans, should be able to use different parts of their morphology (base, elbow, hips, feet) for interaction. This project focuses on learning multi-modal interactions from demonstrations for mobile manipulators.

Keywords

machine learning, manipulation, robotics

Labels

Semester Project , Master Thesis

Description

Most manipulation research focuses on using only the gripper mounted on an arm for interaction. However, robots, like humans, should be able to use different parts of their morphology (base, elbow, hips, feet) for interaction. A common challenge in learning these multi-modal behaviors is mode collapse due to limited expressivity in the policy and/or learning algorithm.

Recently, the conditional denoising diffusion process has shown powerful generative modeling capabilities for image (DALL-E) and behavior generation. The goal of this project is to develop a deeper understanding of these models and investigate their ability to learn interactions with multiple end-effectors on the robot [1]. Additionally, we want to improve upon the vanilla diffusion policy [2] and study the model's generalizability to novel situations.
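The denoising-diffusion machinery referred to above can be summarized in a few lines: the forward process noises a demonstrated action x0 to x_t = sqrt(abar_t)·x0 + sqrt(1 - abar_t)·eps, and a network is trained to predict eps. The sketch below implements only this forward process and its training loss (the network itself is omitted); the schedule and action values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule and cumulative alpha-bar, as in DDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def epsilon_loss(eps_pred, eps):
    """Training objective: mean squared error between predicted and true noise."""
    return np.mean((eps_pred - eps) ** 2)

# A demonstrated action (e.g., an end-effector displacement), noised at two steps.
x0 = np.array([0.5, -0.2, 0.1])
eps = rng.normal(size=3)
x_early, x_late = q_sample(x0, 0, eps), q_sample(x0, T - 1, eps)

# Early steps stay close to the data; late steps are nearly pure noise.
print(np.linalg.norm(x_early - x0), np.linalg.norm(x_late - eps))
```

A diffusion policy conditions the eps-predicting network on observations and iteratively denoises from pure noise back to an action sequence, which is what gives it the multi-modal expressivity the project needs.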

References:

  [1] Sleiman, Jean-Pierre, et al. "Versatile multi-contact planning and control for legged loco-manipulation." Science Robotics (2023).
  [2] Chi, Cheng, et al. "Diffusion policy: Visuomotor policy learning via action diffusion." arXiv:2303.04137 (2023).

Work Packages

  • Literature research on diffusion policies
  • Designing simulation environment in Orbit and generating data
  • Training and improving diffusion policy architecture for multi-modality in interaction data
  • Performing ablations and comparisons to alternate methods

Requirements

  • Highly motivated and autonomous student
  • Experience with machine learning and controls
  • Excellent knowledge of Python and PyTorch
  • Experience with simulators and robot hardware is a bonus

Contact Details

Please send your CV and transcript to the following:


More information

Open this project... 

Published since: 2025-06-25 , Earliest start: 2024-01-08

Organization Robotic Systems Lab

Hosts Mittal Mayank

Topics Information, Computing and Communication Sciences , Behavioural and Cognitive Sciences

Wearable 2D Capacitive Auxetic Structures for Motion Monitoring

Biomedical and Mobile Health Technology Lab

The aim of this project is to develop a single sensor capable of measuring both unidirectional strain and bending angle.

Keywords

wearable, flexible electronics, 3D printing, capacitive strain sensors

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Recent innovations in metamaterials offer promising approaches for creating stretchable and adaptable structures in wearable electronics. Integrating wearables into our daily life and maximizing the functionality of such structures requires new fabrication methods that are both cost-effective and versatile.

In the present project, a new approach utilizing 3D-printed molds to create stretchable auxetic structures is being developed. Soft substrates will be patterned to form auxetic structures that can be integrated into textiles. Simultaneously, the sensing elements will be embedded into the soft substrate using an insert-molding technique. The resulting sensors are designed for motion monitoring, making them suitable for integration into garments and other wearable platforms. Their ability to maintain performance under mechanical deformation makes them particularly advantageous for wearable health monitoring systems.
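The sensing principle behind such a capacitive element can be sketched with the idealized parallel-plate model C = eps0·eps_r·A/d: for a nearly incompressible elastomer dielectric (Poisson ratio about 0.5), the relative capacitance change tracks the applied strain. Dimensions and permittivity below are illustrative, not the project's materials.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """Idealized parallel-plate capacitor: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Illustrative electrode: 10 mm x 10 mm, 0.5 mm dielectric gap, eps_r = 3.
c0 = parallel_plate_capacitance(3, 10e-3 * 10e-3, 0.5e-3)

# Uniaxial stretch by 10%: length grows by the strain, while width and gap
# shrink by Poisson contraction (~0.5 * strain for elastomers). The two
# contractions cancel, so delta-C / C tracks the strain directly.
strain = 0.10
c1 = parallel_plate_capacitance(
    3, (10e-3 * (1 + strain)) * (10e-3 * (1 - 0.5 * strain)),
    0.5e-3 * (1 - 0.5 * strain))
rel_change = (c1 - c0) / c0
print(f"C0 = {c0*1e12:.1f} pF, relative change = {rel_change:.3f}")
```

Bending adds a second, geometry-dependent capacitance change, which is why a single structured sensor can, in principle, report both quantities.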

Goal

  • Develop a method for the production of capacitive auxetic structures
  • Fabricate functional wearable sensors
  • Write a project report

Tasks

  • Literature review (10%)
  • Optimization of the fabrication of capacitive sensors (50%)
  • Characterization of the produced sensors (30%)
  • Data collection, reporting and presentation (10%)

Your profile

  • Background in Applied Physics, Mechanical Engineering, Materials Science, Electronics Engineering, Biomedical Engineering, Health Science or related fields is desirable but not mandatory.
  • Independent worker with critical thinking and problem-solving skills

Contact Details

Prof. Dr. Carlo Menon and Pierre Kateb will supervise the student and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation"); attach a CV with your current program of study, your grades and any other info you deem relevant.

More information

Open this project... 

Published since: 2025-06-24 , Earliest start: 2025-08-01 , Latest end: 2026-12-31

Organization Biomedical and Mobile Health Technology Lab

Hosts Kateb Pierre

Topics Engineering and Technology

CI/CD Automation and Testing for Embedded Systems

Rehabilitation Engineering Lab

Design and implement a robust CI/CD pipeline for our embedded software development.

Keywords

firmware, embedded, github, cicd, workflow

Labels

Internship , Lab Practice

Description

Join us at Skaaltec to design and implement a robust CI/CD pipeline for our embedded software development. In this project, you will leverage GitHub workflows to automate building, testing, and deployment of firmware. The focus will be on improving software quality and speeding up development cycles in a high-reliability embedded environment.

You will be responsible for:

  • Setting up automated code checks and unit tests (e.g., triggered on pull requests)
  • Ensuring that the codebase builds without errors or warnings across configurations
  • Integrating a testing framework for embedded unit testing
  • Developing a secure firmware build and release pipeline, including signing and deployment steps
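A pipeline along these lines might start from a workflow file such as the sketch below; the script names, build flags, and test targets are placeholders, not Skaaltec's actual configuration.

```yaml
# .github/workflows/ci.yml -- illustrative sketch only
name: firmware-ci
on: [pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Static checks
        run: ./scripts/lint.sh          # e.g. clang-format / cppcheck
      - name: Build (warnings as errors)
        run: make all CFLAGS="-Wall -Wextra -Werror"
      - name: Unit tests
        run: make test                  # e.g. a ZTEST suite under an emulator
```

Signing and release steps would be added as a separate job, gated on protected branches and release tags.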

This is a great opportunity to work on modern software development infrastructure, applied to embedded systems with real-world constraints.

Requirements

  • Familiarity with GitHub Actions and CI/CD pipelines
  • Experience in embedded C programming, debugging, and cross-compilation
  • Comfort working in a Unix-based development environment

Preferred Experience

  • Zephyr RTOS and the ZTEST framework
  • Basic knowledge of firmware signing and secure boot concepts


Your Profile

Electrical/Software Engineer, Computer Scientist, or proven equivalent experience

Contact Details

More information

Open this project... 

Published since: 2025-06-23

Organization Rehabilitation Engineering Lab

Hosts Viskaitis Paulius

Topics Information, Computing and Communication Sciences

RA position: Combining Human And Robot Data To Train Manipulation Policies

Computer Vision and Geometry Group

Data collection by humans for training robotic policies is traditionally a strenuous task. It requires the recorder to teleoperate the robot, commonly leading to slow task execution due to the indirect and unfamiliar control mechanism, and to record task demonstrations multiple times due to control imperfections. The work EgoMimic [1] introduces a framework that intelligently combines human-only task demonstrations, recorded without robotic hardware, with traditional human-robot ones, greatly reducing the required recording time thanks to the high proportion of quick-to-record human-only demos. In this project, four top research labs (from Georgia Tech, Stanford, UCSD, and ETH) across the world join efforts to test various challenging hypotheses, such as scaling laws in robotic learning, cross-embodiment generalisation, etc.

Keywords

egocentric vision, robotics, robotic manipulation

Labels

Student Assistant / HiWi , ETH Zurich (ETHZ)


More information

Open this project... 

Published since: 2025-06-23 , Earliest start: 2025-06-20

Organization Computer Vision and Geometry Group

Hosts Zurbrügg René , Wang Xi , Chen Jiaqi , Gavryushin Alexey

Topics Information, Computing and Communication Sciences , Engineering and Technology

Flash Storage and USB Communication for Embedded Systems

Rehabilitation Engineering Lab

This project targets performance and feature enhancements to a firmware module that manages data transfers from flash memory to a host PC over USB (CDC/ACM). The goal is to improve system reliability, throughput, and communication efficiency for data logging or diagnostic use cases.

Keywords

embedded, c programming, firmware, flash memory, usb

Labels

Internship , Lab Practice

Description

This project targets performance and feature enhancements to a firmware module that manages data transfers from flash memory to a host PC over USB (CDC/ACM). The goal is to improve system reliability, throughput, and communication efficiency for data logging or diagnostic use cases.

You will work on:

  • Implementing and evaluating a caching layer for the flash translation mechanism to reduce bus utilization
  • Designing and developing a lightweight USB communication protocol with data transmission and acknowledgment features
  • Enhancing the firmware state machine to optimize the flow of data from flash to USB, ensuring robust handling of edge cases
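One way such a lightweight protocol could be framed is length-prefixed packets with a checksum, where a failed check triggers a NACK and retransmission. The sketch below uses Python for clarity (the firmware itself would be C), and the frame layout and constants are illustrative assumptions.

```python
import struct

SOF = 0xA5  # start-of-frame marker (illustrative)

def checksum(payload: bytes) -> int:
    """Simple 8-bit additive checksum; a real design might use CRC-16."""
    return sum(payload) & 0xFF

def encode_frame(seq: int, payload: bytes) -> bytes:
    """Frame layout: SOF | seq | len | payload | checksum."""
    header = struct.pack("<BBB", SOF, seq & 0xFF, len(payload))
    return header + payload + bytes([checksum(header[1:] + payload)])

def decode_frame(frame: bytes):
    """Return (seq, payload) if the frame verifies, else None
    (which would trigger a NACK/retry on the host side)."""
    if len(frame) < 4 or frame[0] != SOF:
        return None
    seq, length = frame[1], frame[2]
    payload = frame[3:3 + length]
    if len(payload) != length or frame[3 + length] != checksum(frame[1:3] + payload):
        return None
    return seq, payload

frame = encode_frame(7, b"log-chunk")
assert decode_frame(frame) == (7, b"log-chunk")      # round trip succeeds
assert decode_frame(frame[:-1] + b"\x00") is None    # corruption is detected
```

The sequence number lets the receiver acknowledge frames individually and detect drops, which is what makes flash-to-PC transfers restartable.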

You will gain hands-on experience with low-level embedded development and system design, contributing to an evolving codebase used in a real medical device.

Requirements

  • Solid experience with embedded C programming and debugging
  • Understanding of state machine design and resource-constrained systems

Preferred Experience

  • Some familiarity with flash memory interfaces and wear-leveling techniques
  • Experience with Zephyr RTOS


Your Profile

Electrical/Software Engineer, Computer Scientist, or proven equivalent experience

Contact Details

More information

Open this project... 

Published since: 2025-06-23

Organization Rehabilitation Engineering Lab

Hosts Viskaitis Paulius

Topics Information, Computing and Communication Sciences , Engineering and Technology

Doxygen Documentation Pipeline for Embedded Firmware

Rehabilitation Engineering Lab

You will help improve the usability and maintainability of Skaaltec’s embedded firmware codebase by integrating automated documentation generation using Doxygen.

Labels

Internship , Lab Practice

Description

Documentation is a crucial part of any high-quality software project. In this internship, you will help improve the usability and maintainability of Skaaltec’s embedded firmware codebase by integrating automated documentation generation using Doxygen.

The goal is to set up a complete documentation pipeline, both locally and in our GitHub-based CI/CD workflows. You will:

  • Configure Doxygen for the project, including layout, style, and output formats (e.g., HTML, LaTeX/PDF)
  • Write and revise in-code documentation (docstrings) for key modules and APIs
  • Ensure documentation is automatically generated and published as part of the CI/CD process
  • Help define a structure that makes the documentation easy to navigate for new developers and collaborators
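A pipeline of this kind typically starts from a Doxyfile; the fragment below sketches plausible settings, not the project's actual configuration.

```
# Doxyfile fragment -- illustrative settings only
PROJECT_NAME         = "Firmware Docs"
INPUT                = src include
RECURSIVE            = YES
GENERATE_HTML        = YES
GENERATE_LATEX       = NO
EXTRACT_ALL          = YES    # also list undocumented members
WARN_IF_UNDOCUMENTED = YES    # surface missing docstrings in CI logs
```

In CI, a job would run `doxygen Doxyfile`, treat the warnings as a quality gate, and publish the generated HTML as an artifact or to a docs branch.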

This project will help bring clarity and accessibility to a growing embedded codebase used in real-world applications, while giving you hands-on experience with tooling, automation, and technical writing.

Requirements

  • Basic familiarity with Git and GitHub workflows
  • Experience reading and writing embedded C code
  • Interest in technical documentation and developer tooling

Preferred Experience

  • Previous use of Doxygen or similar documentation generators
  • Exposure to CI/CD tools like GitHub Actions
  • Understanding of software architecture and code organization in embedded systems is a plus


Your Profile

Electrical/Software Engineer, Computer Scientist, or proven equivalent experience

Contact Details

More information

Open this project... 

Published since: 2025-06-23

Organization Rehabilitation Engineering Lab

Hosts Viskaitis Paulius

Topics Information, Computing and Communication Sciences

Developing Multi-Functional Microrobots Using Microfluidic Chips (3M project)

Multiscale Robotics Lab

We are looking for a motivated Master’s student to join an exciting interdisciplinary thesis project, a collaboration between the Multi-Scale Robotics Lab (D-MAVT) and the deMello group (D-CHAB) at ETH Zurich. This project focuses on creating a novel microfluidic-based bottom-up method to fabricate multifunctional microrobots. This innovative approach seeks to revolutionize microrobot fabrication, opening the door to diverse new applications.

Keywords

Microfluidics, Self-assembly, Microrobots

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

Background

Microrobots have immense potential in fields such as biomedicine and environmental remediation. However, their development has been hindered by limitations in integrating multiple functional components effectively. Current top-down fabrication methods, e.g. photolithography or 3D printing, struggle to combine diverse functional components, restricting the versatility and performance of microrobots.

To overcome these challenges, this project will develop a novel bottom-up microfluidic assembly method, enabling the creation of multifunctional microrobots with unprecedented precision and flexibility. This innovative approach has the potential to redefine microrobot fabrication and expand their applications significantly.

Ideal Skills and Experience (not mandatory):

• Experience or knowledge in microfluidic devices design and operation.

• Prior experience in chemistry lab.

References

M. Hu et al. "Shaping the assembly of superparamagnetic nanoparticles." Mater. Horiz. 9.6 (2022), 1641.

B. J. Nelson & S. Pané “Delivering drugs with microrobots.” Science 382.6675 (2023), 1120.

T. Moragues et al. “Droplet-Based Microfluidics”. Nature Reviews Methods Primers 3.1 (2023), 32.

Goal

The goal of this project is to develop a novel bottom-up microfluidic assembly method for creating multi-functional microrobots with enhanced precision and flexibility. This approach aims to overcome current limitations in integrating diverse functional components, paving the way for advanced applications in biomedicine and environmental remediation.

Contact Details

Our project is highly interdisciplinary and embodies a high-impact, high-reward research approach. Your work could lead to pioneering discoveries and applications in microrobotics. If you are interested, please contact Dr. Minghan Hu (minghu@ethz.ch) and Chao Song (chao.song@chem.ethz.ch) for more details about the Master thesis.

More information

Open this project... 

Published since: 2025-06-22 , Earliest start: 2025-02-17

Organization Multiscale Robotics Lab

Hosts Hu Minghan

Topics Engineering and Technology , Chemistry

Measurement of hip internal rotation range of motion in individuals with hip joint disorders

Sensory-Motor Systems Lab

Together with the Schulthess Clinic, we have developed the mHIRex, which, based on the common clinical manoeuvre, precisely determines the force required for internal hip rotation. The next step is to assess hip internal rotation in a large cohort of patients with hip disorders.

Keywords

hip osteoarthritis, femoroacetabular impingement syndrome, clinical examination, biomechanics

Labels

Internship , Master Thesis

Project Background

Hip osteoarthritis (OA) can cause pain and disability, ranging from minor activity limitations to severe participation restrictions. Joint stiffness and reduced range of motion are among the most common functional impairments, impacting function and quality of life. A restricted (often symptomatic) hip internal rotation is one of the typical findings in the clinical examination of patients with hip OA. Morphological deformities (cam/pincer) of the hip joint, which have been found to be common in young active individuals, represent potential risk factors for hip OA. Individuals diagnosed with femoroacetabular impingement syndrome (FAIS), for example young athletes with cam morphology involved in vigorous sport activities, may present with a limitation in hip internal rotation.

Recently, it has been found in 244 asymptomatic young males that a standardised assessment (with an examination chair) of internal hip rotation did not allow the exclusion of a cam morphology, but might be used to rule one in. These preliminary results support the use of a standardised assessment of internal hip rotation in the screening of youth and adolescents involved in high-risk sports (i.e., ice hockey, gymnastics, martial arts) and in the documentation of hip range of motion in individuals with other hip disorders.

The ETH SMS Lab has developed and pilot-tested a new mobile device (called “mHIRex”) to examine hip internal rotation while also recording the torque needed for/produced by the hip against the movement (see Figure 1). This device was found to be accurate in the measurement of both hip motion and torque. As only a small number of subjects (24 healthy individuals, 14 elite athletes, and 4 patients with diagnosed FAIS) were involved in the feasibility study, no conclusions could be drawn from the collected data. The next step is to assess hip internal rotation in a large cohort of consecutive patients with hip disorders (incl. OA, FAIS) at the Schulthess Clinic.

Your Task

  • Get familiar with the relevant literature, talk to clinicians
  • Elaborate the measurement device
  • Run study, analyse data, present results

Your Benefits

  • Exchange with very experienced clinicians
  • Being involved in translational science

Your Profile

  • Background in Human Movement Science
  • Interested in clinical biomechanics

Contact Details

Interested? Please send your motivation, CV and latest transcript to Peter Wolf, pwolf@ethz.ch

More information

Open this project... 

Published since: 2025-06-20 , Earliest start: 2025-09-24 , Latest end: 2026-12-31

Applications limited to Department of Health Sciences and Technology , Department of Mechanical and Process Engineering

Organization Sensory-Motor Systems Lab

Hosts Wolf Peter

Topics Medical and Health Sciences , Engineering and Technology

Wearable kirigami antenna for motion monitoring

Biomedical and Mobile Health Technology Lab

The aim of the project is to develop a simple method for fabrication of kirigami-inspired laser-cut or molded antennas on flexible substrates. This technology will enable advancements in wearable electronics for wireless communication and sensing applications.

Keywords

wearable, flexible electronics, kirigami, laser cutting, 3D printing, antenna design, conductivity, wireless communication

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Kirigami is the Japanese art of cutting paper, similar to origami, which is the art of folding paper. Innovations using kirigami-inspired designs and metamaterials have been made in recent years, offering promising approaches for creating stretchable and adaptable structures in wearable electronics. Innovative ways of integrating wearables into our daily life and maximizing the functionality of such structures require new fabrication methods that are both cost-effective and versatile. In the present project, a new approach utilizing laser-cutting and 3D printing for creating kirigami-inspired antennas on flexible substrates is being developed. Soft substrates will be patterned to form kirigami structures that can be integrated into textiles. Simultaneously, the antenna element will be embedded into the soft substrate using an insert molding technique. The resulting kirigami antennas are designed for applications such as wireless communication and motion monitoring, making them suitable for integration into garments and other wearable platforms. The ability of these antennas to maintain performance under mechanical deformation makes them particularly advantageous for wearable health monitoring systems.

Goal

  • Develop a method for the production of kirigami structures
  • Fabricate functional wearable antennas
  • Write a project report

Tasks

  • Literature review (10%)
  • Optimization of the fabrication of kirigamis (50%)
  • Characterization of the produced kirigami (30%)
  • Data collection, reporting and presentation (10%)

Your profile

  • Background in Applied Physics, Health Science, Mechanical Engineering, Materials Science, Electronics Engineering, Biomedical Engineering or related fields is desirable but not mandatory.
  • Independent worker with critical thinking and problem-solving skills

Contact Details

Prof. Dr. Carlo Menon, Dr. Muhammad Zada and Pierre Kateb will supervise the student, and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation"); attach a CV with your current program of study, your grades and any other info you deem relevant.

More information

Open this project... 

Published since: 2025-06-18 , Earliest start: 2025-03-24 , Latest end: 2026-08-31

Organization Biomedical and Mobile Health Technology Lab

Hosts Kateb Pierre

Topics Engineering and Technology

RL Finetuning for Generalized Quadruped Locomotion

Robotic Systems Lab

This project investigates the potential of reinforcement learning (RL) fine-tuning to develop a single, universal locomotion policy for quadruped robots. Building on prior work in multi-terrain skill synthesis [1], we will probe the limits of generalization by systematically fine-tuning on an ever-expanding set of diverse environments. This incremental approach will test the hypothesis that a controller can learn to robustly navigate a vast range of terrains. As a potential extension, procedural terrain generation may be used to automatically create novel challenges, pushing the boundaries of policy robustness.

Keywords

Reinforcement Learning, Quadruped Locomotion

Labels

Master Thesis

Description

Prior work has successfully synthesized multi-terrain skills into a single controller using RL fine-tuning [1]. This project seeks to explore the ultimate scalability of this method: can a policy achieve universal competence through continued training? The primary objective is to incrementally fine-tune the policy on a growing collection of challenging real-world and simulated terrains. This process will systematically evaluate performance to determine if and when generalization capabilities begin to plateau. A further objective could be to explore procedural terrain generation. This would involve creating terrains that specifically target the policy's weaknesses, providing an efficient path to improved robustness. This project will provide key insights into the requirements for creating truly generalist robotic controllers.
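The incremental protocol can be sketched as follows; `train()` and `evaluate()` are toy stand-ins for a full RL fine-tuning pipeline (e.g. massively parallel PPO in simulation), used only to show the loop structure, and the terrain names are illustrative assumptions:

```python
import random

random.seed(0)  # reproducible toy run

def train(policy, terrains, iters=100):
    """Toy stand-in for RL fine-tuning on the current terrain set."""
    for _ in range(iters):
        t = random.choice(terrains)
        policy[t] = min(1.0, policy.get(t, 0.0) + 0.05)  # pretend learning
    return policy

def evaluate(policy, terrains):
    """Mean per-terrain success proxy over the active set."""
    return sum(policy.get(t, 0.0) for t in terrains) / len(terrains)

terrain_pool = ["flat", "stairs", "gap", "boulders", "slope", "rubble"]
policy, active, history = {}, [], []
for new_terrain in terrain_pool:      # incrementally grow the training set
    active.append(new_terrain)
    policy = train(policy, active)
    history.append(evaluate(policy, active))

# A plateau in `history` as terrains are added would indicate the point
# where generalization stops improving under this protocol.
print(history)
```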

References

  • [1] Rudin, N., He, J., Aurand, J., & Hutter, M. (2025). Parkour in the Wild: Learning a General and Extensible Agile Locomotion Policy Using Multi-expert Distillation and RL Fine-tuning. arXiv preprint arXiv:2505.11164.

Work Packages

  • Literature Research
  • Implementation of a scalable fine-tuning environment
  • Scientific evaluation of the scalability of RL fine-tuning

Requirements

  • Excellent knowledge of Python
  • Experience with Reinforcement Learning

Contact Details

Please send your CV, transcript and a short motivation to:

cschwarke@ethz.ch junzhe@ethz.ch

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-06-17 , Earliest start: 2025-06-15

Organization Robotic Systems Lab

Hosts Schwarke Clemens , He Junzhe

Topics Information, Computing and Communication Sciences

Differentiable Simulation for Precise End-Effector Tracking

Robotic Systems Lab

Unlock the potential of differentiable simulation on ALMA, a quadrupedal robot equipped with a robotic arm. Differentiable simulation enables precise gradient-based optimization, promising greater tracking accuracy and efficiency compared to standard reinforcement learning approaches. This project dives into advanced simulation and control techniques, paving the way for improvements in robotic trajectory tracking.

Keywords

Differentiable Simulation, Learning, ALMA

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Differentiable simulation [1] has demonstrated significant improvements in sample efficiency compared to traditional reinforcement learning approaches across various applications, including legged locomotion [2]. This project seeks to explore another key advantage of differentiable simulation: its capability for more precise optimization. The study will focus on a tracking task involving ALMA, a quadrupedal robot equipped with a robotic arm. The primary objectives are to develop a differentiable simulation environment for the robot and evaluate its advantages over traditional reinforcement learning methods. By utilizing the gradients provided by the simulation, control policies will be optimized to improve tracking performance. The work involves creating a tailored differentiable simulation, systematically comparing its performance with reinforcement learning techniques, and analyzing its impact on accuracy and real-world applicability. This project provides an opportunity to contribute to advanced research in robotics by combining theoretical insights with practical implementation.
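As a minimal, self-contained illustration of the core idea (a toy 1D point mass, not the ALMA setup; all names and numbers are illustrative assumptions): forward-mode sensitivities of the state with respect to the policy parameters are carried through the rollout, so the simulator itself provides exact analytic gradients for a plain gradient-descent loop.

```python
import numpy as np

# Toy sketch (not the ALMA pipeline): a 1D point mass should reach a goal
# position. The "policy" is an open-loop action sequence `a`. Forward-mode
# sensitivities d(state)/d(a_t) are propagated alongside the state, giving
# the exact gradient of the tracking loss through the simulation.

steps, dt, goal = 50, 0.02, 1.0

def rollout(a):
    x = v = 0.0
    dx = np.zeros(steps)            # d x / d a_t
    dv = np.zeros(steps)            # d v / d a_t
    for t in range(steps):
        v += dt * a[t]
        dv[t] += dt                 # chain rule through the velocity update
        x += dt * v
        dx += dt * dv               # chain rule through the position update
    loss = (x - goal) ** 2
    grad = 2.0 * (x - goal) * dx    # exact gradient w.r.t. every action
    return x, loss, grad

a = np.zeros(steps)
for _ in range(500):                # plain gradient descent through the sim
    x, loss, grad = rollout(a)
    a -= 2.0 * grad

print(round(x, 3))  # final position ≈ 1.0 (the goal)
```

Because the loss is differentiated through the dynamics rather than estimated from sampled returns, a single rollout yields a low-variance gradient; this sample-efficiency and precision advantage is what the project aims to evaluate on the real robot task.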

References

  • [1] H. J. Suh, M. Simchowitz, K. Zhang, and R. Tedrake, “Do differentiable simulators give better policy gradients?” in International Conference on Machine Learning. PMLR, 2022, pp. 20668–20696.
  • [2] Schwarke, C., Klemm, V., Tordesillas, J., Sleiman, J. P., & Hutter, M. (2024). Learning Quadrupedal Locomotion via Differentiable Simulation. arXiv preprint arXiv:2404.02887

Work Packages

  • Literature research
  • Implementation of a differentiable simulation environment for ALMA
  • Training and evaluation of tracking policies

Requirements

  • Excellent knowledge of Python
  • Background in Simulation or Learning

Contact Details

Please send your CV, transcript and a short motivation (4-5 sentences max.) to:

cschwarke@ethz.ch vklemm@ethz.ch mittalma@ethz.ch

More information

Open this project... 

Published since: 2025-06-17 , Earliest start: 2025-01-27

Organization Robotic Systems Lab

Hosts Mittal Mayank , Schwarke Clemens , Klemm Victor

Topics Information, Computing and Communication Sciences

AI-Driven Push Notifications for Monitoring and Enhancing Adherence in At-Home Neurorehabilitation

Rehabilitation Engineering Lab

Adherence to rehabilitation therapy is crucial for the recovery of hand functionality in stroke and traumatic brain injury (TBI) patients. However, sustaining patient motivation to train at home remains a challenge. This project aims to explore the impact of push notifications delivered via LLM chatbots on adherence to physical therapy among stroke and TBI patients. By investigating the optimal frequency and content of notifications, the goal is to develop an AI-driven notification/reminder system that fosters continuous engagement with the rehabilitation plan, ultimately promoting increased therapy and better functional outcomes for patients.

Keywords

App Development, Stroke, Traumatic Brain Injury, Rehabilitation, Adherence to Therapy, Push Notifications, mHealth Apps, Large Language Models, Interdisciplinary Research, React Native

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Stroke and traumatic brain injury (TBI) are debilitating conditions that often result in long-term physical and cognitive impairments. While rehabilitation therapy in the clinic plays a vital role in promoting recovery, maintaining patient adherence to therapy protocols, especially outside clinical environments, poses significant challenges. Leveraging mobile health technologies is an opportunity to enhance patient engagement and adherence to rehabilitation activities, for example, through smart push notifications triggered by the amount of therapy patients do with robotic devices.

In this project, we aim to investigate the impact of AI-driven push notifications on adherence to physical therapy among stroke and TBI patients. Collaborating with interdisciplinary teams, including clinicians, software developers, and researchers, the student will explore the effectiveness of push notifications in promoting sustained engagement with the at-home rehabilitation plan. The project will involve developing a smart push notifications system for our RehabCoach mobile application, collecting and analyzing data on the influence of push notifications on adherence to physical therapy in an unsupervised setting, and comparing it with real-time data of patients' interactions with ReHandyBot, the upper-limb rehabilitation robot used for therapy.
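As an illustration of the kind of trigger logic involved, a hypothetical adherence rule might look as follows (the rule, thresholds, and field names are assumptions for the sketch, not the actual RehabCoach design):

```python
from datetime import datetime, timedelta
from dataclasses import dataclass

# Hypothetical adherence-triggered reminder rule: queue a push notification
# when logged robot-therapy minutes fall short of the daily goal, respecting
# quiet hours and a minimum gap between reminders to avoid notification fatigue.

@dataclass
class Patient:
    daily_goal_min: int        # prescribed therapy minutes per day
    minutes_today: int         # minutes logged with the therapy robot today
    last_reminder: datetime

def should_notify(p: Patient, now: datetime,
                  min_gap: timedelta = timedelta(hours=4),
                  quiet_before: int = 8, quiet_after: int = 21) -> bool:
    if not (quiet_before <= now.hour < quiet_after):   # respect quiet hours
        return False
    if now - p.last_reminder < min_gap:                # limit reminder frequency
        return False
    return p.minutes_today < p.daily_goal_min          # behind on today's goal

now = datetime(2025, 6, 23, 17, 0)
p = Patient(daily_goal_min=30, minutes_today=10,
            last_reminder=now - timedelta(hours=6))
print(should_notify(p, now))  # True: behind goal, outside quiet hours
```

In the project, such rules would be personalized (e.g. the gap and content adapted per patient) and the decision could be delegated to or refined by the LLM chatbot.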

Goal

The student will conduct a comprehensive literature review to understand existing research on the efficacy of push notifications in healthcare contexts and their potential application in stroke and TBI unsupervised rehabilitation. Using this knowledge, they will design and implement a push notification system tailored to the needs of stroke and TBI patients in the RehabCoach app. The system will allow for personalized notification schedules and content customization based on patient preferences and therapy goals. By exploring the role of push notifications in promoting adherence to rehabilitation therapy, this project aims to contribute to the development of innovative solutions for improving patient outcomes in stroke and traumatic brain injury rehabilitation.

Tasks

  1. Conduct a literature review on push notification effectiveness in healthcare and rehabilitation.
  2. Collaborate with clinicians and researchers to understand patient needs and therapy objectives.
  3. Design and develop a push notification system for the RehabCoach app and integrate it with the existing LLM-based chatbots.
  4. Implement data collection mechanisms to track patient interaction and adherence rates.
  5. Analyze collected data to assess the impact of push notifications on rehabilitation therapy adherence.
  6. Iterate the notification system based on feedback from patients and clinicians.
  7. Document findings and contribute to research publications in relevant academic journals or conferences.

Your Profile

We are seeking a highly motivated master’s student or graduate with a background in software development, computer science, or a related field. The ideal candidate should have a keen interest in mobile health technologies and interdisciplinary research. Strong programming skills and the ability to work collaboratively in a team setting are essential. Additionally, candidates with an understanding of healthcare systems and patient-centered design principles will be preferred.

Contact Details

More information

Open this project... 

Published since: 2025-06-17 , Earliest start: 2025-06-22 , Latest end: 2026-09-01

Organization Rehabilitation Engineering Lab

Hosts Retevoi Alexandra

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology

Learning a Simulation-Trained Safety Critic for Safe Online Learning in Legged Robots

Robotic Systems Lab

This project focuses on developing a safety critic—a model that predicts the safety of robot states—to enable safe online learning on legged robotic hardware. The safety critic is trained in simulation using labeled data from diverse robot behaviors, identifying states likely to lead to failure (e.g., falls). Once trained, the critic is deployed alongside a learning policy to restrict unsafe exploration, either by filtering dangerous actions or shaping the reward function. The goal is to allow adaptive behavior on real hardware while minimizing physical risk.

Keywords

safety critic, online learning

Labels

Master Thesis

Description

Work Packages

Literature research

Understand the training pipeline of the paper Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics.

Explore the possibility of using a first-order gradient in optimizing the policy.

Requirements

Strong programming skills in Python

Experience in machine learning frameworks, especially model-based reinforcement learning.

Publication

This project will mostly focus on simulated environments. Promising results will be submitted to machine learning conferences, where the method will be thoroughly evaluated and tested on a range of systems, from simple MuJoCo environments to complex systems such as quadrupeds and bipeds.

Related literature

Hafner, D., Lillicrap, T., Ba, J. and Norouzi, M., 2019. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603.

Hafner, D., Lillicrap, T., Norouzi, M. and Ba, J., 2020. Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193.

Hafner, D., Pasukonis, J., Ba, J. and Lillicrap, T., 2023. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104.

Li, C., Stanger-Jones, E., Heim, S. and Kim, S., 2024. FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning. arXiv preprint arXiv:2402.13820.

Song, Y., Kim, S. and Scaramuzza, D., 2024. Learning Quadruped Locomotion Using Differentiable Simulation. arXiv preprint arXiv:2403.14864.

Li, C., Krause, A. and Hutter, M., 2025. Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics. arXiv preprint arXiv:2501.10100.

Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch

More information

Open this project... 

Published since: 2025-06-16

Organization Robotic Systems Lab

Hosts Li Chenhao

Topics Engineering and Technology

Data Driven Simulation for End-to-End Navigation

Robotic Systems Lab

Investigate how neural rendering can become the backbone of comprehensive, next-generation, data-driven simulation.

Keywords

Neural rendering, Simulation

Labels

Internship , Master Thesis

Description

Simulation-based training of locomotion and environment-interaction policies has recently shown tremendous success in pushing the abilities of real-world robots. Using massive parallelization, simulation-based learning enables robots to quickly learn new skills without the time and hardware investments attached to trying things out in the real world. However, one remaining challenge is that such simulators currently focus on physics, while the simulation of perception readings is often limited to simple geometry. To support, for example, end-to-end vision-based models, we would like to add realistic image rendering of complex, realistic environments to such simulators.

In this project, we'd like to explore a data-driven approach to add such capabilities to a simulator. Specifically, neural rendering methods have made large progress in recent years and their use in simulators for training and validation is now actively being investigated. Challenges that need to be addressed are given by 1) runtime considerations for efficient use inside a simulator, 2) artifact-free rendering of novel views, and 3) the imposition of physical constraints such as watertight meshes or the structural stability of static environment reconstructions.

The project is conducted at The AI Institute, a recently established top robotics research institute created by the founders of Boston Dynamics.

References

  • [1] Neuralangelo: High-Fidelity Neural Surface Reconstruction, CVPR 2023
  • [2] ViPlanner: Visual Semantic Imperative Learning for Local Navigation, ICRA 2024
  • [3] OmniRe: Omni Urban Scene Reconstruction, arXiv 2024
  • [4] 3D Gaussian Splatting for Real-Time Radiance Field Rendering, SIGGRAPH 2023

Work Packages

  • Literature research
  • Adding existing rendering functionality to the simulator (similar to [2])
  • Incorporate Gaussian splatting based rendering into the simulator
  • Improve Gaussian splatting for use in simulators (render quality, mesh extraction, …)
  • Set up a validation pipeline for the simulator (validate end-to-end policy, or VIO, …)

Requirements

  • Excellent knowledge of Python
  • Computer vision experience
  • Knowledge of neural rendering methods

Contact Details

Alexander Liniger (aliniger@theaiinstitute.com) Igor Bogoslavskyi (ibogoslavskyi@theaiinstitute.com)

Please include an up-to-date CV and transcript.

More information

Open this project... 

Published since: 2025-06-16 , Earliest start: 2025-01-27

Organization Robotic Systems Lab

Hosts Kneip Laurent

Topics Information, Computing and Communication Sciences , Engineering and Technology

Event-based feature detection for highly dynamic tracking

Robotic Systems Lab

Event cameras are an exciting new technology enabling sensing of highly dynamic content over a broad range of illumination conditions. The present thesis explores novel, sparse, event-driven paradigms for detecting structure and motion patterns in raw event streams.

Keywords

Event camera, neuromorphic sensing, feature detection, computer vision

Labels

Master Thesis

Description

Event cameras are a relatively new, vision-based exteroceptive sensor relying on standard CMOS technology. Unlike normal cameras, event cameras do not measure absolute brightness in a frame-by-frame manner, but relative changes in pixel-level brightness. Essentially, every pixel of an event camera independently observes the local brightness pattern, and when the latter changes by more than a minimum relative amount with respect to a previous reference value, a measurement is triggered in the form of a time-stamped event indicating the image location as well as the polarity of the change (brighter or darker) [2]. The pixels act asynchronously and can potentially fire events at a very high rate. Owing to their design, event cameras do not suffer from the same artifacts as regular cameras, but continue to perform well under high dynamics or challenging illumination conditions.
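The per-pixel triggering principle can be sketched in a few lines (an illustrative model with an assumed contrast threshold, not the specification of any particular sensor):

```python
import numpy as np

# Minimal event-generation model for a single pixel: an event (t, polarity)
# fires whenever the log-brightness has changed by more than a contrast
# threshold C since the last event, and the reference value is updated.

def events_from_signal(t, log_intensity, C=0.2):
    events, ref = [], log_intensity[0]
    for ti, L in zip(t, log_intensity):
        while L - ref >= C:          # brightness increased past threshold
            ref += C
            events.append((ti, +1))
        while ref - L >= C:          # brightness decreased past threshold
            ref -= C
            events.append((ti, -1))
    return events

t = np.linspace(0.0, 1.0, 1000)
signal = np.log(1.0 + 4.0 * t)       # a pixel watching a brightening scene
evts = events_from_signal(t, signal)
print(len(evts))                      # a handful of +1 events, no -1 events
```

Note that the event rate tracks the rate of brightness change rather than a fixed frame clock, which is precisely what makes sparse, event-driven motion estimation attractive for highly dynamic scenes.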

Event cameras currently enjoy growing popularity and they represent a new, interesting alternative for exteroceptive sensing in robotics when facing scenarios with high dynamics and/or challenging conditions. The focus of the present thesis lies on 3D motion estimation with event cameras, and in particular aims at event-driven, computationally efficient methods that can trigger motion hypotheses from sparse raw events. Initial theoretical advances in this direction have been presented in recent literature [3,4,5], though these methods are still limited in terms of the assumptions that they make. The present thesis will push the boundaries by proposing novel representations, both geometry-based and learning-based.

The proposed thesis will be conducted at the Robotics and AI Institute, a new top-notch partner institute of Boston Dynamics pushing the boundaries of control and perception in robotics. Selection is highly competitive. Potential candidates are invited to submit their CV and grade sheet, after which students will be invited to an on-site interview.

[1] Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps, TPAMI 40(10):2402-2412, 2017

[2] The Silicon Retina, Scientific American, 264(5): 76-83, 1991

[3] A 5-Point Minimal Solver for Event Camera Relative Motion Estimation. In Proceedings of the International Conference on Computer Vision (ICCV), 2023

[4] An n-point linear solver for line and motion estimation with event cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

[5] Full-DoF Egomotion Estimation for Event Cameras Using Geometric Solvers, Arxiv: https://arxiv.org/html/2503.03307v1

Work Packages

  • Literature research
  • Extend the mathematical foundation for sparse event-based motion estimation
  • Propose novel detectors that extend operability from lines and constant-velocity motion to full 6-DoF motion estimation from either points or lines, or other specific object trajectories such as ballistic curves
  • Investigate learning-based, sparse event-based motion detectors to handle more general cases
  • Apply the technology to real-world data to track fast ego-motion or ballistic object motion in the environment

Requirements

  • Excellent knowledge of C++
  • Computer vision experience
  • Knowledge of geometric computer vision
  • Plus: Experience with event cameras

Contact Details

Laurent Kneip (lkneip@theaiinstitute.com)

Please include your CV and up-to-date transcript.

More information

Open this project... 

Published since: 2025-06-16 , Earliest start: 2025-03-17

Organization Robotic Systems Lab

Hosts Kneip Laurent

Topics Engineering and Technology

Soft object reconstruction

Robotic Systems Lab

This project consists of reconstructing soft objects, along with their appearance, geometry, and physical properties, from image data for inclusion in reinforcement learning frameworks for manipulation tasks.

Keywords

Computer Vision, Structure from Motion, Image-based Reconstruction, Physics-based Reconstruction

Labels

Master Thesis

Description

As 3D reconstruction [2,3], real-time data-driven rendering [4,5], and learning-based control technologies [6,7] are becoming more mature, recent efforts in reinforcement learning are moving towards end-to-end policies that directly consume images in order to generate control commands [8]. However, many of the simulated environments are limited to a composition of rigid objects. In recent years, the inclusion of differentiable particle-based simulation borrowed from computer graphics has enabled the inclusion of non-rigid or even fluid elements. Ideally, we can generate such representations from real world data in order to extend data-driven world simulators to arbitrary new objects with complex physical behavior.

The present thesis focuses on this problem and aims at reconstructing soft objects in terms of their geometry, appearance, and physical behavior. The goal is to make use of the Material Point Method (MPM) in combination with vision-based cues and physical priors in order to reconstruct accurate 3D models of soft objects. The developed models will finally be included into an RL learning environment such as Isaac-Gym in order to train novel manipulation policies for soft objects.

The proposed thesis will be conducted at the Robotics and AI Institute, a new top-notch partner institute of Boston Dynamics pushing the boundaries of control and perception in robotics. Selection is highly competitive. Potential candidates are invited to submit their CV and grade sheet, after which students will be invited to an on-site interview.

[1] Modeling of Deformable Objects for Robotic Manipulation: A Tutorial and Review, Front. Robot. AI, 7, 2020

[2] Global Structure-from-Motion Revisited, ECCV 2024

[3] MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors, CVPR 2025

[4] NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, CVPR 2020

[5] 3D Gaussian Splatting for Real-Time Radiance Field Rendering, SIGGRAPH 2023

[6] Learning to walk in minutes using massively parallel deep reinforcement learning, CoRL 2022

[7] Champion-Level Drone Racing Using Deep Reinforcement Learning, Nature, 2023

[8] π0: A Vision-Language-Action Flow Model for General Robot Control, Arxiv: https://arxiv.org/abs/2410.24164

Work Packages

  • Literature research
  • Design of a suitable reconstruction method based on visual data and physical priors
  • Dataset collection and testing
  • Cross-validation against contact-based reconstruction methods
  • Embedding into Isaac-Gym for training novel manipulation policies

Requirements

  • Excellent knowledge of Python or C++
  • Computer vision experience
  • Interest in optimization with physics representations

Contact Details

Laurent Kneip (lkneip@theaiinstitute.com)

Sina Mirrazavi (smirrazavi@theaiinstitute.com)

Please include your CV and up-to-date transcript.

More information

Open this project... 

Published since: 2025-06-16 , Earliest start: 2025-03-17

Organization Robotic Systems Lab

Hosts Kneip Laurent

Topics Engineering and Technology

Utilizing the human body for ambient electromagnetic energy harvesting

Biomedical and Mobile Health Technology Lab

The goal of the project is to develop wearable devices that use the human body to harvest ambient electromagnetic energy.

Keywords

Flexible electronics, electromagnetic energy harvesting

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

The proposed project aims to develop a flexible energy-harvesting device that utilizes the human body to scavenge ambient electromagnetic (EM) energy. With the rapid expansion of wireless technologies and ubiquitous EM radiation in urban environments, this project explores a novel and sustainable approach to convert ambient EM energy into usable electrical power for low-energy wearable electronics. Unlike conventional rectenna-based harvesters, this approach leverages the human body as a functional dielectric medium to enhance polarization-induced charge accumulation at the electrode interface.

Methodology

  • Materials selection and device fabrication.

  • Electrical performance characterization using an oscilloscope and other equipment.

  • PCB circuit design and fabrication.

Goal

  • Low contact impedance interface device design and fabrication.

  • Electrical and mechanical characterization and performance tuning: evaluate the output electrical signal under different parameters.

  • Design and fabrication of back-end processing circuits.

  • Application exploration: Integrate the designed device with wearable devices.

  • Write scientific project report / research articles (if possible)

Tasks

  • Literature review (10%)

  • Device design and fabrication (30%)

  • Characterization and performance tuning (20%)

  • Application implementation (30%)

  • Reporting and presentation (10%)

Your Profile

  • Background in Applied Physics, Mechanical Engineering, Chemical Engineering, Materials Science, Electronics Engineering, Biomedical Engineering or related fields

  • Independent worker with critical thinking and problem-solving skills

Contact Details

Prof. Dr. Carlo Menon, Dr. Weifeng Yang and Yuanlong Li will supervise the student, and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project, attach a CV with your current program of study, your grades and any other info you deem relevant. Please include length of time that your project or thesis will occupy (i.e. 2 months, 6 months, etc).

More information

Open this project... 

Published since: 2025-06-15 , Earliest start: 2025-06-20

Organization Biomedical and Mobile Health Technology Lab

Hosts Li Yuanlong

Topics Engineering and Technology

Vision-Based Agile Aerial Transportation

Robotics and Perception

Develop a vision-based aerial transportation system with reinforcement / imitation learning.

Keywords

aerial transportation, reinforcement learning (RL), drones, robotics

Labels

Master Thesis

Description

Transporting loads with drones is often constrained by traditional control systems that rely on predefined flight paths, GPS, or external motion capture systems. These methods limit a drone's adaptability and responsiveness, particularly in dynamic or cluttered environments. Vision-based control has the potential to revolutionize aerial transportation by enabling drones to perceive and respond to their surroundings in real-time. Imagine a drone that can swiftly navigate through complex environments and deliver payloads with precision using only onboard vision sensors. Applicants are expected to be proficient in Python, C++, and Git.

Goal

This project aims to develop a vision-based control system for drones capable of agile and efficient aerial transportation. The system will leverage real-time visual input to dynamically adapt to environmental conditions, navigate obstacles, and manage load variations with reinforcement or imitation learning.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Leonard Bauersfeld [bauersfeld (at) ifi (dot) uzh (dot) ch], Angel Romero [roagui (at) ifi (dot) uzh (dot) ch]

More information

Open this project... 

Published since: 2025-06-12 , Earliest start: 2025-05-01 , Latest end: 2026-02-28

Organization Robotics and Perception

Hosts Geles Ismail

Topics Information, Computing and Communication Sciences , Engineering and Technology

Personalized Low Latency Interactive AI Project

Sensory-Motor Systems Lab

We are seeking one highly motivated student to join our innovative project focused on developing a cutting-edge voice recognition and personalization platform for wheelchair users. This project aims to deliver low-latency, context-aware, and personalized AI interactions in noisy, multi-user environments, leveraging advanced models and distilled LLMs, combined with biosignal tracking and GDPR-compliant data management.

Keywords

Voice recognition, AI personalization, low latency, LLMs, biosignal tracking, neurofeedback, multi-user environments, audio processing

Labels

Semester Project , Internship , Master Thesis

Project Background

The Personalized Low Latency Interactive AI project addresses the limitations of current voice recognition systems, which often struggle with high latency, limited personalization, and poor performance in noisy settings. The project integrates robust speech-to-text conversion, Flash Attention-2 for low-latency processing, and distilled large language models (LLMs) for real-time personality detection and response generation. The system ensures precise speaker identification and tailored interactions. User-specific memories are stored in a GDPR-compliant local database, enabling dynamic personalization based on preferences and personality traits. The project aims to revolutionize voice-based interactions in office and public settings, with a focus on scalability, accessibility, and sustainability. The overall goal is a highly personalized, low-latency AI agent tailored to wheelchair users to support their mental well-being. Deployed on an Nvidia Orin 64GB development kit, the platform ensures seamless, real-time responses in noisy, multi-user environments, prioritizing accessibility and user engagement.

Your Task

You will focus on integrating personality trait and preference estimation into the distilled LLM response system to create a personalized AI agent for wheelchair users. You will work on enhancing the system’s ability to learn user-specific memories through interactions, using Mediapipe for lip-reading and Insightface for person identification, while optimizing latency on the Nvidia Orin 64GB development kit.

  • Implement personality trait and preference estimation using distilled LLMs.
  • Integrate lip-reading and person identification for accurate memory retrieval.
  • Optimize system latency for seamless, real-time responses on Nvidia Orin hardware.
  • Test and validate the personalization pipeline for wheelchair users’ mental well-being.
  • Integrate lip-reading and person identification modules on Nvidia Orin in Months 1–2.
  • Develop and test memory-based personalization algorithms in Months 2–3.
  • Optimize latency and validate system performance in Months 4–5.
  • Prepare a thesis or report documenting findings in Month 6.
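As a loose illustration of the memory component described above, the sketch below shows a minimal GDPR-style local store with per-user trait and memory records and full erasure on request. All class and method names here are hypothetical, not part of the project's codebase.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Locally stored, per-user memory record (hypothetical schema)."""
    user_id: str
    traits: dict = field(default_factory=dict)    # e.g. personality estimates
    memories: list = field(default_factory=list)  # interaction snippets

class MemoryStore:
    """Minimal local store: per-user records, full erasure on request."""
    def __init__(self):
        self._db = {}

    def remember(self, user_id, snippet):
        self._db.setdefault(user_id, UserProfile(user_id)).memories.append(snippet)

    def update_trait(self, user_id, trait, value):
        self._db.setdefault(user_id, UserProfile(user_id)).traits[trait] = value

    def recall(self, user_id):
        profile = self._db.get(user_id)
        return profile.memories if profile else []

    def erase(self, user_id):
        """Right-to-erasure: delete everything stored about a user."""
        self._db.pop(user_id, None)

store = MemoryStore()
store.remember("u1", "prefers short answers")
store.update_trait("u1", "extraversion", 0.7)
print(store.recall("u1"))  # ['prefers short answers']
store.erase("u1")
print(store.recall("u1"))  # []
```

In a real deployment the dictionary would be backed by an encrypted on-device database, but the interface (remember / recall / erase) would stay the same.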

Your Benefits

This role offers a unique opportunity to work with cutting-edge AI models, lip-reading, and person identification technologies while contributing to accessibility and social impact.

  • Gain hands-on experience with distilled LLMs, Speech-to-Text and Person Identification Neural Networks.
  • Develop expertise in low-latency AI systems and personalization algorithms.
  • Collaborate with a multidisciplinary team of AI and accessibility experts.
  • Produce a thesis or report with potential for academic publication.
  • Access to state-of-the-art Nvidia Orin 64GB development kit.
  • Opportunity to contribute to a high-impact project with real-world accessibility applications.
  • Mentorship from researchers in AI and human-computer interaction.

Your Profile

We are looking for a driven candidate with a passion for AI, accessibility, and low-latency systems. The ideal candidate is a student pursuing a thesis or internship with strong technical skills and an interest in enhancing mental well-being through AI.

  • Enrolled in a Master’s program in Computer Science, Engineering, or a related field.
  • Experience with Python and familiarity with LLMs, signal processing, or computer vision.
  • Knowledge of Mediapipe, Insightface, or Nvidia hardware is a plus.
  • Strong problem-solving skills and commitment to accessibility-focused research.
  • Available for a 6-month internship or thesis project.

Contact Details

For inquiries or to apply, please reach out with your CV and a brief cover letter. Applications will be reviewed on a rolling basis, and shortlisted candidates will be invited for an interview.

Email: shreyasvi.natraj@hest.ethz.ch, diego.paez@hest.ethz.ch

Start Date: Flexible

More information

Open this project... 

Published since: 2025-06-11 , Earliest start: 2025-08-01

Organization Sensory-Motor Systems Lab

Hosts Paez Diego, Dr.

Topics Engineering and Technology

Learning to Socially Navigate in Crowds using RL

Robotic Systems Lab

This project aims to develop a robotic planner that can safely navigate crowded environments, considering human movement patterns and social norms. It seeks to overcome limitations of current planners, which either require privileged information or can't handle semantic constraints. The goal is to create a robust planner for real robots (ANYmal or Unitree B2W) that works in dynamic, constrained environments. Challenges include training an RL policy, expanding movement patterns, and transferring from simulation to real hardware.

Keywords

Reinforcement Learning, Navigation, Planning, Robotics, Legged Robotics, Simulation

Labels

Master Thesis

Description

Navigating in crowded environments is one of the most challenging robotic tasks due to the dynamic setting and the variety of human movements. This project seeks to overcome the limitations planners face in such dynamic environments while also respecting social norms that are crucial for safety (e.g., crossing the street at a crosswalk, staying on the sidewalk). While recent advancements in reinforcement-learning-based local planners have demonstrated success in demanding scenarios, they require privileged information about human movements and cannot handle semantic constraints [1]. Conversely, planners that follow semantic constraints remain limited to low update rates and static scenes [2]. To address these issues, we have started designing a novel planner capable of safely navigating to goal positions while exposed to different human movement patterns and semantic constraints, and it has already achieved promising results. This project aims to advance that work and bring it to the real system (either ANYmal or Unitree B2W). Challenges will include training a stable policy in RL using the IsaacLab framework [3], enhancing the variety of movement patterns, and performing the transfer from simulation to the real hardware.
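One way the competing objectives above (goal progress, human proximity, semantic constraints such as staying on the sidewalk) could be combined into a single RL reward is sketched below. The function name, weights, and terms are illustrative assumptions, not the lab's actual formulation.

```python
import numpy as np

def social_nav_reward(pos, prev_pos, goal, human_positions, on_sidewalk,
                      w_goal=1.0, w_collision=5.0, w_semantic=0.5, safe_dist=0.5):
    """Hypothetical per-step reward for social navigation.

    progress  : reduction in distance to goal since the last step
    collision : penalty growing as any human comes within safe_dist
    semantic  : flat penalty for violating a semantic constraint
    """
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    collision = sum(max(0.0, safe_dist - np.linalg.norm(pos - h))
                    for h in human_positions)
    semantic = 0.0 if on_sidewalk else 1.0
    return w_goal * progress - w_collision * collision - w_semantic * semantic
```

In practice such terms are tuned jointly with the locomotion policy; the sketch only shows how the trade-offs enter a scalar objective.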

Work Packages

  • Literature review on crowd and social navigation
  • Planner development using RL in Simulation
  • Integration of ANYmal and real-world evaluation

Requirements

  • Extensive programming experience (Python) with large codebases
  • Experience with deep learning projects (preferably with RL)
  • Knowledge of planning, perception, and robot dynamics is a plus

Contact Details

Please send your CV and TOR to Fan Yang (fanyang1@ethz.ch) and Pascal Roth (rothpa@ethz.ch)


More information

Open this project... 

Published since: 2025-06-11

Applications limited to ETH Zurich , EPFL - Ecole Polytechnique Fédérale de Lausanne

Organization Robotic Systems Lab

Hosts Roth Pascal

Topics Information, Computing and Communication Sciences

Active System Identification for Efficient Online Adaptation

Robotic Systems Lab

This project proposes a novel single-stage training framework for system identification in legged locomotion, addressing limitations in the conventional two-stage teacher-student paradigm. Traditionally, a privileged teacher policy is first trained with full information, followed by a student policy that learns to mimic the teacher using only state-action histories—resulting in suboptimal exploration and limited adaptability. In contrast, our method directly trains a policy to regress privileged information embeddings from its history while simultaneously optimizing for an active exploration objective. This objective is based on maximizing mutual information between the policy’s state-action trajectories and the privileged latent variables, encouraging exploration of diverse dynamics and enhancing online adaptability. The approach is expected to improve sample efficiency and robustness in deployment environments with variable dynamics.
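A rough numpy sketch of the single-stage idea, assuming a tiny MLP encoder: the same regression error that trains the history encoder also yields a per-sample intrinsic reward −‖ẑ − z‖², a standard variational lower-bound surrogate for the mutual information between trajectories and the privileged latent (trajectories that make z easy to infer score higher). All names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_encoder(hist_dim, z_dim, hidden=64):
    """Tiny MLP regressing the privileged embedding z from a flattened
    state-action history. Weights are random here; in the real method
    they are trained jointly with the policy."""
    return {"W1": rng.normal(0, 0.1, (hist_dim, hidden)),
            "W2": rng.normal(0, 0.1, (hidden, z_dim))}

def predict_z(params, hist):
    h = np.tanh(hist @ params["W1"])
    return h @ params["W2"]

def sysid_objectives(params, hist, z_priv):
    """Returns (supervised regression loss, per-sample intrinsic reward).

    The intrinsic reward -||z_hat - z||^2 approximates log q(z | trajectory),
    a variational lower bound on I(trajectory; z) up to a constant."""
    z_hat = predict_z(params, hist)
    err = ((z_hat - z_priv) ** 2).mean(axis=-1)
    return err.mean(), -err
```

The exploration bonus is added to the task reward during policy optimization, so the policy is pushed toward trajectories that reveal the environment dynamics.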

Keywords

Active Exploration, System Identification, Online Adaptation

Labels

Master Thesis

Description

Work packages

Literature research

Implement the teacher-student training baseline.

Implement the proposed active system identification method.

Systematic analysis and comparison between the methods, with visualization of the latent environment embedding.

Hardware deployment.

Requirements

Strong programming skills in Python

Experience in machine learning frameworks, especially reinforcement learning.

Publication

This project will mostly focus on simulated environments. Promising results will be submitted to machine learning conferences, where the method will be thoroughly evaluated and tested on different systems.

Related literature

https://www.science.org/doi/10.1126/scirobotics.aau5872

https://www.science.org/doi/10.1126/scirobotics.abc5986

https://www.science.org/doi/full/10.1126/scirobotics.abk2822


Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch


More information

Open this project... 

Published since: 2025-06-06

Organization Robotic Systems Lab

Hosts Li Chenhao

Topics Engineering and Technology

Learning from Online Demonstrations via Video Diffusion for Local Navigation

ETH Competence Center - ETH AI Center

This project introduces a framework for local navigation skill acquisition through online learning from demonstrations, bypassing the need for offline expert trajectories. Instead of relying on pre-collected data, we use video diffusion models conditioned on semantic text prompts to generate synthetic demonstration videos in real time. These generated sequences serve as reference behaviors, and the agent learns to imitate them via an image-space reward function. The navigation policy is built atop a low-level locomotion controller and targets deployment on legged platforms such as humanoids and quadrupeds. This approach enables semantically guided, vision-based navigation learning with minimal human supervision and strong generalization to diverse environments.
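One simple form the image-space reward mentioned above could take, assuming frames are compared pixel-wise (a learned feature space would work similarly); the function name and scale are illustrative assumptions:

```python
import numpy as np

def image_space_reward(obs_frame, ref_frame, scale=1.0):
    """Hypothetical tracking reward: exp(-scale * mean squared pixel error)
    between the agent's rendered observation and the diffusion-generated
    reference frame. Identical frames give reward 1; mismatches decay
    the reward toward 0."""
    err = np.mean((obs_frame.astype(np.float32) -
                   ref_frame.astype(np.float32)) ** 2)
    return float(np.exp(-scale * err))
```

The policy then maximizes the sum of such rewards over the generated reference sequence, imitating the synthetic demonstration without any offline expert trajectories.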

Keywords

Learning from Demonstrations, Video Diffusion, Semantic Conditioning

Labels

Master Thesis

Description

Work packages

Use video diffusion models to generate reference trajectories with semantic conditions.

Train a local navigation policy by learning from demonstrations using the generated trajectories.

Deployment on hardware.

Requirements

Strong programming skills in Python

Experience in reinforcement learning and imitation learning frameworks

Publication

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to robotics or machine learning conferences where outstanding robotic performances are highlighted.

Related literature

Peng, Xue Bin, et al. "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions On Graphics (TOG) 37.4 (2018): 1-14.

Li, C., Vlastelica, M., Blaes, S., Frey, J., Grimminger, F. and Martius, G., 2023, March. Learning agile skills via adversarial imitation of rough partial demonstrations. In Conference on Robot Learning (pp. 342-352). PMLR.

Li, Chenhao, et al. "FLD: Fourier latent dynamics for Structured Motion Representation and Learning."

Serifi, A., Grandia, R., Knoop, E., Gross, M. and Bächer, M., 2024, December. Vmp: Versatile motion priors for robustly tracking motion on physical characters. In Computer Graphics Forum (Vol. 43, No. 8, p. e15175).

Fu, Z., Zhao, Q., Wu, Q., Wetzstein, G. and Finn, C., 2024. Humanplus: Humanoid shadowing and imitation from humans. arXiv preprint arXiv:2406.10454.

He, T., Luo, Z., Xiao, W., Zhang, C., Kitani, K., Liu, C. and Shi, G., 2024, October. Learning human-to-humanoid real-time whole-body teleoperation. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8944-8951). IEEE.

He, T., Luo, Z., He, X., Xiao, W., Zhang, C., Zhang, W., Kitani, K., Liu, C. and Shi, G., 2024. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858.

Albaba, M., Li, C., Diomataris, M., Taheri, O., Krause, A. and Black, M., 2025. Nil: No-data imitation learning by leveraging pre-trained video diffusion models. arXiv preprint arXiv:2503.10626.

Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch

More information

Open this project... 

Published since: 2025-06-06

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao

Topics Information, Computing and Communication Sciences

Learning World Models for Legged Locomotion (Structured legged world model)

Robotic Systems Lab

Model-based reinforcement learning learns a world model from which an optimal control policy can be extracted. Understanding and predicting the forward dynamics of legged systems is crucial for effective control and planning. Forward dynamics involve predicting the next state of the robot given its current state and the applied actions. While traditional physics-based models can provide a baseline understanding, they often struggle with the complexities and non-linearities inherent in real-world scenarios, particularly due to the varying contact patterns of the robot's feet with the ground.

The project aims to develop and evaluate neural network-based models for predicting the dynamics of legged environments, with a focus on accounting for varying contact patterns and non-linearities. This involves collecting and preprocessing data from various simulation experiments, designing neural network architectures that incorporate the necessary structure, and exploring hybrid models that combine physics-based predictions with neural network corrections. The models will be trained and evaluated on autoregressive prediction accuracy, with an emphasis on robustness and generalization across different noise perturbations. By the end of the project, the goal is an accurate, robust, and generalizable predictive model for the forward dynamics of legged systems.
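The autoregressive evaluation mentioned above can be sketched as follows, with a toy residual forward model standing in for the trained network. Weights here are random, and all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model(state_dim, action_dim, hidden=64):
    """Residual forward model s_{t+1} = s_t + f(s_t, a_t). In practice f
    is trained by regression on simulation rollouts; here it is random."""
    return {"W1": rng.normal(0, 0.1, (state_dim + action_dim, hidden)),
            "W2": rng.normal(0, 0.1, (hidden, state_dim))}

def step(model, s, a):
    h = np.tanh(np.concatenate([s, a]) @ model["W1"])
    return s + h @ model["W2"]

def rollout(model, s0, actions):
    """Autoregressive rollout: each prediction is fed back as the next
    input, which is how long-horizon prediction accuracy is evaluated
    (errors compound over the horizon)."""
    s, states = s0, []
    for a in actions:
        s = step(model, s, a)
        states.append(s)
    return np.stack(states)
```

Comparing such rollouts against ground-truth simulator trajectories exposes exactly the contact-induced non-linearities the project targets.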

Keywords

forward dynamics, non-smooth dynamics, neural networks, model-based reinforcement learning

Labels

Master Thesis

Description

Work packages

Literature research

Understand the training pipeline of the paper Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics.

Explore the possibility of using a first-order gradient in optimizing the policy.

Requirements

Strong programming skills in Python

Experience in machine learning frameworks, especially model-based reinforcement learning.

Publication

This project will mostly focus on simulated environments. Promising results will be submitted to machine learning conferences, where the method will be thoroughly evaluated and tested on different systems (e.g., simple Mujoco environments to complex systems such as quadrupeds and bipeds).

Related literature

Hafner, D., Lillicrap, T., Ba, J. and Norouzi, M., 2019. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603.

Hafner, D., Lillicrap, T., Norouzi, M. and Ba, J., 2020. Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193.

Hafner, D., Pasukonis, J., Ba, J. and Lillicrap, T., 2023. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104.

Li, C., Stanger-Jones, E., Heim, S. and Kim, S., 2024. FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning. arXiv preprint arXiv:2402.13820.

Song, Y., Kim, S. and Scaramuzza, D., 2024. Learning Quadruped Locomotion Using Differentiable Simulation. arXiv preprint arXiv:2403.14864.

Li, C., Krause, A. and Hutter, M., 2025. Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics. arXiv preprint arXiv:2501.10100.


Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch


More information

Open this project... 

Published since: 2025-06-06

Organization Robotic Systems Lab

Hosts Li Chenhao

Topics Engineering and Technology

All-textile Wearable Thermochromic Displays

Biomedical and Mobile Health Technology Lab

The goal of the project is to develop a technology for information display on textile utilizing thermochromism phenomenon.

Keywords

wearable, display, textile, thermochromism, e-textile, fabric

Labels

Bachelor Thesis , Master Thesis

Description

Wearable sensors and electronic components are steadily entering everyday and biomedical settings. Modern advances in this field seek to transfer these technologies from silicon embodiments onto fabric platforms. Such e-textiles would allow non-invasive, unobtrusive, and easy use of wearable devices. Yet while sensors have been transferred to all-textile formats with considerable success, technologies capable of displaying information on fabric and garments remain rare.

The present project aims to develop a technology capable of displaying information on a wearable, all-textile platform. To this end, a textile material capable of changing color upon application of heat will be developed. A prototype textile display utilizing these properties will then be assembled from the developed material.

Goals

• Study the thermochromic phenomena in textiles

• Develop thermochromic textile prototype

• Produce and test a prototype of wearable thermochromic display

• Write a scientific project report

Tasks

• Literature review (10%)

• Thermochromic textile development (40%)

• Wearable display development (40%)

• Data collection and analysis, reporting and presentation (10%)

Your Profile

• Background in Applied Physics, Chemical Engineering, Chemistry, Materials Science, Electronics Engineering, Biomedical Engineering or related fields

• Independent worker with critical thinking and problem-solving skills

Contact Details

Prof Dr Carlo Menon and Dr. Alexander Shokurov will supervise the student and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation"); attach a mini CV with your current program of study, your grades, and any other information you deem relevant (perhaps the name and e-mail of a postdoc or professor willing to serve as a reference); and add any further comments ("additional remarks"). Please include the length of time your thesis will occupy (e.g., 6 months) and the earliest date you can start.

More information

Open this project... 

Published since: 2025-06-05 , Earliest start: 2025-06-01

Organization Biomedical and Mobile Health Technology Lab

Hosts Shokurov Aleksandr

Topics Medical and Health Sciences , Engineering and Technology

Mechanophores for advanced wearable strain and pressure sensors

Biomedical and Mobile Health Technology Lab

The goal of the project is to synthesize and characterize a number of small molecules capable of acting as mechanophore addition to various polymers. These polymers would then be used as wearable strain or pressure sensors.

Keywords

mechanophore, polymer, wearable, sensor, color, strain, pressure

Labels

Master Thesis

Description

One of the most important biomedical parameters that can be measured by wearable devices is human body movement. This movement relates to limb motion, breathing, speech, heart rate, and more. Accurate measurement of these movements requires precise strain and pressure sensors with high sensitivity.

Mechanophores are materials capable of changing physical properties (most often color) in response to local mechanical stimuli, like strain or stress. This is achieved by insertion of stress-responsive molecular units into the polymeric backbone.

In the present project, mechanophoric elastomeric materials will be synthesized to study whether mechanophoric action can enhance the sensitivity of strain sensors for wearable applications. First, new approaches for facile, scaled-up production of mechanophoric elastomers will be developed through the synthesis of small-molecule functional cross-linkers; the resulting mechanophore polymers will then be applied as the active material in electronic strain sensors.

We intend to carry out an exciting multidisciplinary study of the organic synthesis strategies, physico-chemical behavior of the new materials, and their practical applications in wearable sensors.

Goals

• Synthesize functional mechanophoric cross-linker small molecules;

• Incorporate synthesized mechanophores into elastomers and verify mechanophoric action upon linear strain;

• Validate the effect of mechanophoric units on strain sensing;

• Write a scientific project report;

Tasks

• Literature review (10%)

• Synthesis of functional cross-linkers and elastomers (40%)

• Validation of the effect of mechanophoric behavior on strain sensor properties (40%)

• Reporting and presentation (10%)

Your Profile

• Background in Chemistry, Chemical Engineering, Materials Science, or related fields

• Independent worker with critical thinking and problem-solving skills

Contact Details

Prof Dr Carlo Menon and Dr. Alexander Shokurov will supervise the student and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland.

To apply, use the button below to tell us why you want to do this project ("motivation"); attach a CV with your current program of study, your grades and any other info you deem relevant.

More information

Open this project... 

Published since: 2025-06-05 , Earliest start: 2025-06-01 , Latest end: 2026-04-01

Organization Biomedical and Mobile Health Technology Lab

Hosts Shokurov Aleksandr

Topics Engineering and Technology , Chemistry

Point-of-Care Sensor for Urinary Iodine

Biomedical and Mobile Health Technology Lab

The goal of the project is to develop a cheap and disposable sensor capable of determination of iodine levels in human urine for early diagnostic purposes.

Keywords

electrochemistry, iodine, nutrition, health, point of care

Labels

Master Thesis

Description

Proper iodine intake is crucial for optimal health, impacting everything from metabolism to cognitive development in utero, in children, and in adults. However, assessing iodine levels can be challenging, particularly in point-of-care or remote settings. This project seeks to address this gap by developing a disposable sensor specifically designed to measure urinary iodine levels, providing rapid and accurate insight into patient nutrition.

The proposed project aims to utilize electrochemical detection and synthesis techniques involving carbon- and silver-based nanomaterials to produce and validate sensors for aqueous iodine. This work will include the design and fabrication of the sensor and its calibration for iodine detection in water, in synthetic urine, and then in real samples. If highly successful, portable electronics capable of driving the developed detection protocol will also be developed and validated.

Goals

• Develop the method for the graphene electrode formation and deposition of nanosilver

• Validate and test the sensor’s accuracy and reliability in measuring urinary iodine levels in synthetic samples of progressive complexity and real samples

• Develop electronic suite capable of driving the developed sensors

• Write a scientific project report

Tasks

• Literature review (10%)

• Optimize the deposition technique and fabrication process for the sensor element (50%)

• Validation of the sensor in synthetic and real life samples (20%)

• Development of signal read-out electronic components (10%)

• Data collection and analysis, reporting and presentation (10%)

Your Profile

• Background in Biomedical technology, Chemistry, Materials Science or related fields

• Prior experience with chemicals and standard chemical laboratory equipment

• Knowledge of electrochemistry and/or electrochemical sensing is highly desirable

• Independent worker with critical thinking and problem-solving skills

Contact Details

Prof Dr Carlo Menon and Dr. Alexander Shokurov will supervise the student and the research will be performed at ETH Zurich’s Biomedical and Mobile Health Technology research group (www.bmht.ethz.ch) in the Balgrist Campus in Zurich, Switzerland. When applying, please describe your motivation and your background in the cover letter, and attach your CV and all the transcripts of the previous studies.

More information

Open this project... 

Published since: 2025-06-05 , Earliest start: 2025-01-01 , Latest end: 2025-10-01

Organization Biomedical and Mobile Health Technology Lab

Hosts Shokurov Aleksandr

Topics Medical and Health Sciences , Engineering and Technology , Chemistry

How to Touch: Exploring Tactile Representations for Reinforcement Learning

Robotic Systems Lab

Developing and benchmarking tactile representations for dexterous manipulation tasks using reinforcement learning.

Keywords

Reinforcement Learning, Dexterous Manipulation, Tactile Sensing

Labels

Semester Project , Bachelor Thesis , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-06-04 , Earliest start: 2024-12-15 , Latest end: 2025-06-01

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Bhardwaj Arjun , Zurbrügg René

Topics Information, Computing and Communication Sciences

AI-Driven Rock Reshaping Simulation and Control

Robotic Systems Lab

This project develops an intelligent system for controlling rock fracture by combining finite element analysis (FEM) with machine learning. FEM simulations train a graph neural network (GNN) to predict fracture patterns. A reinforcement learning (RL) agent then uses this predictive GNN to learn optimal actions for guiding fractures towards a desired rock geometry, enabling precise and goal-oriented control.

Keywords

machine learning, deep learning, reinforcement learning, graph neural networks, construction robotics, space robotics

Labels

Semester Project , Master Thesis

Description

This project addresses a novel approach for simulating and controlling rock fracture processes using a combination of finite element analysis (FEM), supervised learning, and reinforcement learning (RL). The goal is to develop an intelligent system capable of predicting fracture patterns and actively guiding the fracturing process towards a desired rock geometry.

The pipeline begins with a fracture simulation based on FEM, which models the physical stresses and crack propagation given the original rock geometry [1]. The data generated from many such simulations is then used to train a graph neural network (GNN) via supervised learning. This GNN learns to predict the fracture behavior and resulting rock shape from the initial conditions and applied forces, and generalizes across geometries and materials [2]. Concurrently, an RL agent is developed. The agent's objective is to determine the optimal actions (e.g., applied forces or drilling patterns) to achieve a desired rock geometry. It learns through interaction with a training environment that leverages the pre-trained GNN to rapidly simulate the outcomes of its actions on the rock geometry.

By integrating FEM-based simulation with advanced machine learning techniques, this approach aims to create a powerful tool for understanding and controlling rock fracturing, with potential applications in fields such as mining, demolition, and large-scale construction robotics [3]. The GNN enables fast, accurate prediction of fracture mechanics on arbitrary geometries, while the RL agent provides intelligent, goal-oriented control over the fracturing process.
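A toy sketch of the surrogate-in-the-loop training environment described above, assuming the GNN surrogate is any callable mapping (geometry, action) to the next geometry; the class, reward, and threshold are illustrative stand-ins, not the project's implementation.

```python
import numpy as np

class SurrogateFractureEnv:
    """Toy RL environment where a fast learned surrogate replaces the FEM
    simulator. The reward is the negative distance between the current
    geometry descriptor and the target shape."""

    def __init__(self, surrogate, target, tol=1e-2):
        self.surrogate = surrogate  # callable: (geometry, action) -> geometry
        self.target = target
        self.tol = tol
        self.state = None

    def reset(self, geometry):
        self.state = geometry
        return self.state

    def step(self, action):
        # One "impact": the surrogate predicts the post-fracture geometry.
        self.state = self.surrogate(self.state, action)
        dist = np.linalg.norm(self.state - self.target)
        return self.state, -dist, dist < self.tol
```

Because the surrogate is orders of magnitude faster than FEM, the agent can take millions of such steps during training, which is the point of learning the GNN predictor first.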

[1] Li, J., Dai, L., Wang, S., Liu, Y., Sun, Y., Wang, J. and Zhang, A., 2024. Theoretical and numerical analysis of the rock breaking process by impact hammer. Powder Technology, 448, p.120254.

[2] Mousavi, S., Wen, S., Lingsch, L., Herde, M., Raonić, B. and Mishra, S., 2025. RIGNO: A Graph-based framework for robust and accurate operator learning for PDEs on arbitrary domains. arXiv preprint arXiv:2501.19205.

[3] Ryan Luke Johns et al. ,A framework for robotic excavation and dry stone construction using on-site materials.Sci. Robot.8,eabp9758(2023).DOI:10.1126/scirobotics.abp9758

Work Packages

  • GNN training for 2D and 3D rock shapes
  • Online adaptation of rock parameters from observed fracture patterns
  • RL agent identifying the optimal impact point, and performing the fracturing

Requirements

  • Programming experience in Python and with common learning frameworks
  • Relevant knowledge in machine learning and deep learning
  • Experience with RL, GNNs, and/or JAX is a plus

Contact Details


More information

Open this project... 

Published since: 2025-06-02 , Earliest start: 2025-07-07

Organization Robotic Systems Lab

Hosts Spinelli Filippo

Topics Information, Computing and Communication Sciences , Engineering and Technology

Bridging the Gap: Enabling Soft Actor-Critic for High-Performance Legged Locomotion

ETH Competence Center - ETH AI Center

Proximal Policy Optimization (PPO) has become the de facto standard for training legged robots, thanks to its robustness and scalability in massively parallel simulation environments like IsaacLab. However, alternative algorithms such as Soft Actor-Critic (SAC), while sample-efficient and theoretically appealing due to entropy maximization, have not matched PPO’s empirical success in this domain. This project aims to close that performance gap by developing and evaluating modifications to SAC that improve its stability, scalability, and sim-to-real transferability on legged locomotion tasks. We benchmark SAC against PPO using standardized pipelines and deploy learned policies on real-world quadruped hardware, pushing toward more flexible and efficient reinforcement learning solutions for legged robotics.
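For concreteness, the soft Bellman target at the heart of SAC, which injects the entropy bonus −α log π into the critic backup that PPO's on-policy objective lacks. This is a textbook sketch, not the project's code.

```python
import numpy as np

def sac_target(reward, next_q1, next_q2, next_logp,
               alpha=0.2, gamma=0.99, done=0.0):
    """Soft Bellman target for SAC's twin critics:

        y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))

    The min over two critics curbs overestimation; the -alpha * log pi term
    rewards entropy, which is SAC's main formal difference from PPO."""
    next_v = np.minimum(next_q1, next_q2) - alpha * next_logp
    return reward + gamma * (1.0 - done) * next_v
```

Scaling this update to thousands of parallel IsaacLab environments (replay-buffer layout, update-to-data ratio, critic stability) is precisely where the project's open questions lie.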

Keywords

Legged locomotion, Soft Actor-Critic, Reinforcement learning, Sim-to-real transfer

Labels

Master Thesis

Description

Work packages

Understand why SAC underperforms in legged locomotion and establish baselines.

Make SAC work with massively parallel simulation frameworks.

Validate performance of improved SAC policies on real hardware.

Provide a rigorous, reproducible comparison between SAC and PPO.

Hardware validation encouraged

Requirements

Strong programming skills in Python

Experience in reinforcement learning frameworks, including SAC and PPO

Publication

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to robotics or machine learning conferences.

Related literature

PPO

SAC

Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch

More information

Open this project... 

Published since: 2025-05-30

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao

Topics Information, Computing and Communication Sciences

Development of Neuromuscular Biohybrid Robots

Soft Robotics Lab

Biohybrid robots integrate living cells and synthetic components to achieve motion. These systems often rely on engineered skeletal muscle tissues that contract upon electrical stimulation for actuation. Neuromuscular-powered biohybrid robots take this concept further by integrating motor neurons to induce muscle contractions, mimicking natural muscle actuation. In our lab, we are developing neuromuscular actuators using advanced 3D co-culture systems and biofabrication techniques to enable functional macro-scale biohybrid robots.

Keywords

Tissue engineering, mechanical engineering, biology, neuroengineering, biomaterials, biohybrid robotics, 3D in vitro models, biofabrication, bioprinting, volumetric printing.

Labels

Semester Project , Bachelor Thesis , Master Thesis , ETH Zurich (ETHZ)

Description

Our project focuses on overcoming current limitations in neuromuscular biohybrid robots, particularly scaffold design and the development of functional neuromuscular junctions (NMJs). You will apply expertise in biology, biomaterial synthesis, and biofabrication to tackle the key challenges of creating functional engineered tissues within the field of biohybrid robotics.

Goal

By incorporating motor neurospheres into 3D co-culture systems with skeletal muscle tissues, we aim to establish global innervation and enable synchronized muscle contractions.

Contact Details

More information

Open this project... 

Published since: 2025-05-28 , Earliest start: 2025-06-02

Organization Soft Robotics Lab

Hosts Badolato Asia , Katzschmann Robert, Prof. Dr.

Topics Medical and Health Sciences , Engineering and Technology , Biology

Master Thesis - Signal Processing for Neurological Data

Rehabilitation Engineering Lab

We are offering a Master's thesis project for a motivated student to develop a complete signal processing pipeline tailored to neurological data, with the goal of detecting early biomarkers of cognitive or neurological conditions. This project blends neuroscience, signal processing, and artificial intelligence in a practical and high-impact context.

Keywords

signal processing, neurological data, fNIRS, EEG, neuroimaging, brain-computer interface, biomedical signal processing, artifact removal, noise reduction, ICA, wavelet denoising, feature extraction, FFT, PSD, ERP, hemodynamic response, connectivity analysis, machine learning, AI, classification, clustering, SVM, Random Forest, deep learning, PCA, LDA, anomaly detection, biomarkers, neuroscience, Python, MNE, scikit-learn, PyTorch, TensorFlow, Optohive, ETH Zurich, Relab

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

At the Rehabilitation Engineering Laboratory (Relab) at ETH Zurich, we have developed an innovative functional near-infrared spectroscopy (fNIRS) technology in collaboration with the neurotechnology startup Optohive. Optohive is building a cutting-edge platform for real-time neurological signal acquisition and analysis. We are offering a Master's thesis project for a motivated student to develop a complete signal processing pipeline tailored to neurological data, with the goal of detecting early biomarkers of cognitive or neurological conditions. This project blends neuroscience, signal processing, and artificial intelligence in a practical and high-impact context.

Goal

  • A reproducible signal processing and machine learning pipeline (Python preferred)
  • Comprehensive research documentation and result visualizations
  • Integration of the pipeline into the Optohive platform (in collaboration with the development team)
  • Final written report and scientific presentation

Tasks

Feature Extraction

  • Time- and frequency-domain analysis (FFT, PSD)
  • Detection of task-evoked hemodynamic responses and connectivity patterns
  • Event-related potential (ERP) analysis
  • Extraction of statistical, entropy-based, and domain-specific features

AI & Machine Learning Integration

  • Classification and clustering using SVM, Random Forests, and deep learning models
  • Dimensionality reduction and feature selection (e.g., PCA, LDA)
  • Anomaly detection and discovery of temporal patterns in brain activity

Biomarker Discovery

  • Identifying features associated with specific cognitive or physiological states
  • Statistical validation and visualization of biomarkers
  • Benchmarking against existing approaches through literature review
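As an illustration of the frequency-domain step, here is a minimal periodogram-based band-power extractor (NumPy only, on a synthetic single-channel signal); a real pipeline would more likely use Welch's method, e.g. `scipy.signal.welch`, with windowing and overlap:

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within frequency `band` (Hz), estimated
    from a raw periodogram (squared FFT magnitude, one-sided)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Synthetic single-channel recording: a 10 Hz oscillation plus weak noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

features = {
    "alpha_power": band_power(x, fs, (8, 13)),   # contains the 10 Hz tone
    "beta_power": band_power(x, fs, (13, 30)),   # mostly noise
}
```

Band powers like these become one row of the tabular feature matrix that the downstream classifiers consume.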

Your Profile

  • Background in biomedical signal processing, neuroscience, biomedical engineering or physics
  • Strong programming skills in Python (NumPy, SciPy, MNE, scikit-learn, PyTorch/TensorFlow)
  • Experience with EEG, fNIRS, or related neuroimaging modalities
  • Interest in AI/ML applications in neuroscience and healthcare

Contact Details

For more information, please reach out to: marc.willhaus@hest.ethz.ch or dominik.wyser@hest.ethz.ch relab.ethz.ch | optohive.com

More information

Open this project... 

Published since: 2025-05-22 , Earliest start: 2025-08-01 , Latest end: 2026-06-01

Organization Rehabilitation Engineering Lab

Hosts Willhaus Marc

Topics Medical and Health Sciences , Engineering and Technology , Biology , Physics

Master Thesis - Deep Learning and AI Modelling of Neurological Data

Rehabilitation Engineering Lab

We are looking for a Master's student to co-develop AI and machine learning models and inference pipelines based on neurological fNIRS sensor data.

Keywords

deep learning, time-series, fNIRS, EEG, EMG, neurotechnology, neurological data, sequence modeling, CNN, LSTM, GRU, Transformer, hybrid models, self-supervised learning, contrastive learning, biomarker discovery, AI, machine learning, brain-computer interface, data augmentation, model interpretability, Grad-CAM, SHAP, saliency maps, biomedical signal processing, PyTorch, TensorFlow, Python, Optohive, ETH Zurich, Relab

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

At the Rehabilitation Engineering Laboratory (Relab) at ETH Zurich, we are at the forefront of translating brain research into clinical applications. In collaboration with the neurotechnology startup Optohive, we have developed a novel functional near-infrared spectroscopy (fNIRS) system to record high-resolution neurological data. We are offering a Master’s thesis project focused on designing deep learning models for the analysis of brain time-series data. The objective is to uncover meaningful patterns and biomarkers from real-time neurophysiological recordings. You will work closely with our interdisciplinary team to shape the future of brain-computer interfaces through cutting-edge AI.

Goal

  • A reproducible deep learning pipeline (Python preferred, using PyTorch or TensorFlow)
  • Structured evaluation and comparative analysis of model architectures
  • A biomarker candidate report with physiological interpretations
  • Final thesis report and potential contribution to a scientific publication

Tasks

Data Handling & Preparation

  • Sequence shaping and time-windowing for temporal data
  • Data augmentation techniques specific to neurological signals
  • Task-aligned labeling with cognitive events or external triggers

Model Development

  • Design and train time-series models (CNNs, LSTMs, GRUs, Transformers)
  • Explore hybrid architectures combining convolutional and recurrent layers
  • Incorporate multimodal brain imaging data
  • Investigate self-supervised and contrastive learning for limited-label datasets

Model Evaluation

  • Evaluate model performance using metrics like accuracy, F1-score, and AUC
  • Visualize model insights through techniques like Grad-CAM, SHAP, or saliency maps
  • Analyze latent representations and decision boundaries

Biomarker Discovery & Integration

  • Identify signal features learned by models that correlate with neurological biomarkers
  • Validate findings against known physiological indicators
  • Integrate trained models into Optohive's signal processing platform
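The sequence-shaping and task-aligned labeling steps can be sketched as sliding-window segmentation; the window length, stride, and last-sample labeling rule below are illustrative choices, not the lab's actual pipeline:

```python
import numpy as np

def make_windows(x, labels, win, step):
    """Cut a (time, channels) array into overlapping windows.

    Each window is labeled by the label at its last sample, mimicking
    alignment of windows with cognitive events or external triggers.
    """
    starts = range(0, x.shape[0] - win + 1, step)
    X = np.stack([x[s:s + win] for s in starts])
    y = np.array([labels[s + win - 1] for s in starts])
    return X, y

# Toy 2-channel recording, 100 samples, condition label flips halfway.
x = np.random.default_rng(1).standard_normal((100, 2))
labels = np.array([0] * 50 + [1] * 50)
X, y = make_windows(x, labels, win=20, step=10)
# X has shape (windows, time, channels), ready for a CNN/LSTM input layer.
```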

Your Profile

  • Strong Python skills and experience with deep learning libraries (PyTorch/TensorFlow)
  • Solid foundation in machine learning, biomedical signal processing, or applied AI
  • Familiarity with neurophysiological data (EEG, fNIRS, EMG, etc.) or general time-series data
  • Interest in neurotechnology and the development of brain-computer interfaces

Contact Details

For more information, please reach out to: marc.willhaus@hest.ethz.ch or dominik.wyser@hest.ethz.ch relab.ethz.ch | optohive.com

More information

Open this project... 

Published since: 2025-05-22 , Earliest start: 2025-07-01 , Latest end: 2026-06-01

Organization Rehabilitation Engineering Lab

Hosts Willhaus Marc

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology , Physics

Learning Terrain Traversal from Human Strategies for Agile Robotics

Computer Vision and Geometry Group

Teaching robots to walk on complex and challenging terrains, such as rocky paths, uneven ground, or cluttered environments, remains a fundamental challenge in robotics and autonomous navigation. Traditional approaches rely on handcrafted rules, terrain classification, or reinforcement learning, but they often struggle with generalization to real-world, unstructured environments.

Keywords

3D reconstruction, egocentric video, SMPL representation

Labels

Semester Project , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-21 , Earliest start: 2025-05-26

Organization Computer Vision and Geometry Group

Hosts Wang Xi , Frey Jonas , Patel Manthan , Kaufmann Manuel , Li Chenhao

Topics Information, Computing and Communication Sciences

HandoverNarrate: Language-Guided Task-Aware Motion Planning for Handovers with Legged Manipulators

Robotic Systems Lab

This project addresses the challenge of task-oriented human-robot handovers, where a robot must transfer objects in a manner that directly facilitates the human’s next action. In our prior work, we demonstrated that robots can present objects appropriately for immediate human use by leveraging large language models (LLMs) to reason about task context. However, integrating task-specific physical constraints—such as ensuring a full mug remains upright during transport—into the motion planning process remains unsolved. In this project, we aim to extend our existing motion planning framework for legged manipulators by incorporating such constraints. We propose using LLMs to dynamically generate task-aware constraint formulations based on high-level task descriptions and object states. These constraints will then be used to adjust the cost function of the model predictive controller in real time, enabling more context-sensitive and physically appropriate handovers.
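As a toy illustration of the proposed mechanism, the LLM step can be stubbed out by a keyword rule that emits a weight for an upright-orientation penalty added to the controller's tracking cost; all names, values, and the keyword rule itself are hypothetical placeholders for the LLM-generated constraint formulation:

```python
import numpy as np

def constraint_weights(task_description):
    """Stub for the LLM step: map a task description to cost weights.

    In the proposed pipeline an LLM would reason about task context and
    emit these weights (or full constraint expressions); a keyword match
    stands in for it here.
    """
    upright_needed = any(w in task_description.lower() for w in ("full", "liquid", "mug"))
    return {"tilt": 50.0 if upright_needed else 0.0}

def handover_cost(ee_pos, goal_pos, tilt_angle, weights):
    """Quadratic end-effector tracking cost plus a task-conditioned
    penalty on object tilt (radians)."""
    tracking = np.sum((ee_pos - goal_pos) ** 2)
    return tracking + weights["tilt"] * tilt_angle ** 2

w = constraint_weights("hand over the full mug")
cost_tilted = handover_cost(np.zeros(3), np.zeros(3), tilt_angle=0.3, weights=w)
cost_level = handover_cost(np.zeros(3), np.zeros(3), tilt_angle=0.0, weights=w)
```

In the actual system the weight would enter the model predictive controller's cost function in real time, so the same motion planner produces upright transport only when the task demands it.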

Keywords

language-guided motion planning, legged robotics, human-robot collaboration

Labels

Semester Project , Bachelor Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-21

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Tulbure Andreea

Topics Information, Computing and Communication Sciences

Humanoid Locomotion in Rough Terrain via Imitation Learning

Robotic Systems Lab

TL;DR: Make a humanoid robot walk on rough terrain using human demonstrations and RL.

Labels

Semester Project , Master Thesis

Description

Humanoid robot hardware is beginning to match human capabilities in terms of agility and dynamic movement. This progress enables the use of human demonstrations as references for learning human-aligned locomotion skills. At the same time, Meta's Project Aria glasses make it possible to estimate human body poses at low cost. Existing approaches use such demonstration data to train reinforcement learning (RL) agents in simulation with a formulation that incentivizes mimicking the reference motion. These policies either mimic the human demonstration directly or use motion retargeting mechanisms to transfer the style onto an RL agent trained in simulation. While these approaches have shown impressive results on flat ground, they often lack adaptability to complex environments, such as jumping over obstacles, climbing, or performing parkour-style movements that require environment-conditioned responses. This project addresses that limitation by collecting an aligned dataset of human motion in complex terrain, including climbing and parkour. The data is captured using Meta Project Aria glasses, and a precise 3D scan of the environment is used to create an accurate mesh that can be transferred to the simulator. This setup enables training RL agents with a higher level of environmental awareness and adaptability.
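The "formulation that incentivizes mimicking the reference motion" is typically an exponentiated tracking reward, as popularized by DeepMimic. A minimal joint-space sketch (the gain `k` and the pose vectors are illustrative; real formulations also track velocities, end-effector positions, and root pose):

```python
import numpy as np

def imitation_reward(q, q_ref, k=2.0):
    """Exponentiated pose-tracking term: the reward approaches 1 as the
    policy's joint configuration q approaches the reference q_ref, and
    decays smoothly toward 0 as they diverge."""
    return float(np.exp(-k * np.sum((q - q_ref) ** 2)))

q_ref = np.array([0.1, -0.3, 0.5])          # reference pose from demonstration
r_close = imitation_reward(q_ref + 0.01, q_ref)  # near the reference
r_far = imitation_reward(q_ref + 1.0, q_ref)     # far from the reference
```

This term is summed with task rewards during RL training, so the agent is pulled toward human-like motion while still free to deviate when the terrain demands it.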

What We Offer:

  • Active support and mentorship, including help on potential publications
  • Access to high-performance compute clusters
  • Weekly meetings and career guidance

Work Packages

  • Implementing an imitation learning RL training pipeline on rough terrain
  • Supporting the collection of imitation data
  • Supporting the creation of the real-2-sim pipeline

Requirements

Who We're Looking For:

  • A highly motivated student or researcher
  • Experience with PyTorch or similar ML frameworks
  • (Optional) Background in computer vision

Contact Details

Please contact Chenhao Li, Jonas Frey and Xi Wang via Mail. chenhao.li@ai.ethz.ch (CC. jonfrey@ethz.ch, xi.wang@inf.ethz.ch)

  • Subject: Application: RL Learning Progress - Firstname Lastname
  • Content: Please provide 3 sentences why you would like to do the project within the mail.
  • Appendix: Bachelor transcript. Current Master transcript. CV. If you did not obtain your Bachelor's degree at ETH, please provide a relative grade ranking.


More information

Open this project... 

Published since: 2025-05-21 , Earliest start: 2025-05-31 , Latest end: 2025-09-30

Applications limited to ETH Zurich , EPFL - Ecole Polytechnique Fédérale de Lausanne , University of Zurich

Organization Robotic Systems Lab

Hosts Frey Jonas

Topics Information, Computing and Communication Sciences , Engineering and Technology , Behavioural and Cognitive Sciences

Exploring upper limb impairments using explainable AI on Virtual Peg Insertion Test data

Rehabilitation Engineering Lab

This thesis aims to apply explainable AI techniques to analyze time series data from the Virtual Peg Insertion Test (VPIT), uncovering additional metrics that describe upper limb impairments in neurological subjects, such as those with stroke, Parkinson's disease, and multiple sclerosis. By preserving the full dimensionality of the data, the project will identify new patterns and insights to aid in understanding motor dysfunctions and support rehabilitation.

Keywords

Machine learning, rehabilitation, neurology, upper limb, impairment, explainable AI, SHAP, novel technology, assessment, computer vision, artificial intelligence

Labels

Master Thesis

Description

Neurological disorders often result in upper limb impairments, which affect quality of life and increase dependency. The VPIT provides valuable data through robotic testing of 3D position and grip force during a reaching task. This thesis will use AI techniques to analyze this data, aiming to uncover novel metrics that provide insights into motor impairments and offer better diagnosis and rehabilitation approaches.

Goal

Explore time series data from the VPIT using explainable AI methods to identify and interpret new movement and force metrics that describe upper limb impairments in individuals with neurological disorders.

Tasks

  • Prepare and clean the VPIT time series data (position and grip force) for analysis, ensuring it is ready for machine learning modeling.
  • Apply machine learning models (e.g., LSTMs, CNNs, Random Forests) to the raw time series data and use explainable AI techniques (e.g., SHAP, LIME, attention mechanisms) to uncover important movement and force patterns that relate to upper limb impairments.
  • Analyze the results to identify key temporal features that correlate with neurological impairments, and provide insights into their potential role in diagnosis and rehabilitation.
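Alongside SHAP or LIME, a simple model-agnostic baseline for the explainability step is permutation importance: the drop in accuracy when one feature column is shuffled. The sketch below uses a toy rule-based "model" and synthetic data in place of a trained classifier on VPIT features:

```python
import numpy as np

def permutation_importance(model_fn, X, y, rng):
    """Accuracy drop when each feature is shuffled in turn.

    Features whose shuffling hurts accuracy most are the ones the model
    relies on; irrelevant features score exactly zero.
    """
    base = np.mean(model_fn(X) == y)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(base - np.mean(model_fn(Xp) == y))
    return np.array(scores)

# Toy data: the label depends only on feature 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda A: (A[:, 0] > 0).astype(int)  # stand-in for a trained model
imp = permutation_importance(model, X, y, rng)
```

On real VPIT data the columns would be extracted movement and force features, and the importance profile indicates which ones carry impairment-related information.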

Your Profile

  • Strong background in machine learning and time series analysis, with proficiency in Python.
  • Interest in applying AI techniques to clinical data, particularly in neurological research.
  • Motivated, independent, and creative problem-solving approach.
  • Strong communication skills in English.

Contact Details

If you are interested or have any further questions, please contact: Nadine Domnik (nadine.domnik@hest.ethz.ch). Please include your CV and transcript of records in your application.

More information

Open this project... 

Published since: 2025-05-20 , Earliest start: 2025-06-01

Organization Rehabilitation Engineering Lab

Hosts Domnik Nadine

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology

Comparing the Virtual Peg Insertion Test (VPIT) with the haptic device Inverse3 for assessing upper limb function

Rehabilitation Engineering Lab

This thesis will compare the Virtual Peg Insertion Test (VPIT) with the Inverse3 haptic device by Haply to evaluate its effectiveness as a tool for assessing upper limb function. The focus will be on comparing both the hardware features and software capabilities to determine if the Inverse3 can serve as a valid alternative to VPIT for clinical assessments.

Keywords

Haptic device, virtual environment, rehabilitation, programming, health technology, assessment, software, hardware

Labels

Collaboration , Master Thesis

Description

The VPIT consists of a haptic device (Geomagic Touch) connected to a virtual environment, used to assess sensorimotor function of the upper limb, while the Inverse3 is a newer haptic device designed for 3D interaction and navigation. This project will compare the two devices by evaluating their hardware performance (precision, response times, and capabilities) and software APIs (ease of use, flexibility, and compatibility). Both devices will be tested to assess their ability to measure movement and grip force.

Goal

Evaluate the potential of the Inverse3 haptic device as a replacement or complement to the VPIT for upper limb assessments by comparing the hardware features, performance, and software capabilities.

Tasks

  • Compare the hardware features (precision, response time) and usability of the VPIT and Inverse3 devices during upper limb tasks.
  • Analyze the software API of both devices, focusing on ease of integration and flexibility in clinical settings.
  • Compare the movement and force data generated by both devices and assess their ability to measure relevant upper limb functions.
  • Determine whether the Inverse3 can be a valid alternative to VPIT for clinical use, providing recommendations for improvements or applications.

Your Profile

  • Familiar with hardware (testing and evaluation), preferably with haptic devices.
  • Proficiency in programming and working with software APIs (Python preferred).
  • Interest in rehabilitation technologies and clinical research.
  • Strong analytical and communication skills.

Contact Details

If you are interested or have any further questions, please contact: Nadine Domnik (nadine.domnik@hest.ethz.ch). Please include your CV and transcript of records in your application.

More information

Open this project... 

Published since: 2025-05-20 , Earliest start: 2025-06-01

Organization Rehabilitation Engineering Lab

Hosts Domnik Nadine

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology

Embedded algorithms of IMUs in a neurorehabilitation device

Rehabilitation Engineering Lab

The goal of this project is to help develop embedded firmware for an IMU-based rehabilitation device. This project is part of the SmartVNS project, which uses movement-gated control of vagus nerve stimulation for stroke rehabilitation.

Keywords

electrical engineering, PCB, embedded systems, neurorehabilitation

Labels

Semester Project , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-19 , Earliest start: 2024-01-06 , Latest end: 2024-12-31

Organization Rehabilitation Engineering Lab

Hosts Donegan Dane , Viskaitis Paulius

Topics Medical and Health Sciences , Engineering and Technology

Development and Testing of Electrical Systems for a SmartVNS Docking Station with Focus on Wireless Data Management

Rehabilitation Engineering Lab

We are looking for an enthusiastic electrical/firmware engineer to design and implement the electrical and firmware aspects of a docking station for the SmartVNS device. The station will charge the device components (pulse generator and wrist motion tracker) and pull data from the pulse generator and motion tracker, uploading it to an online server via Wi-Fi. This project will also involve testing the reliability of data transfer and power systems under real-world conditions, providing valuable insights into the practical application of this technology.

Keywords

Electrical, embedded, electronic, engineering, biomedical

Labels

Internship , Bachelor Thesis , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-19 , Earliest start: 2024-08-18 , Latest end: 2025-10-01

Organization Rehabilitation Engineering Lab

Hosts Viskaitis Paulius

Topics Information, Computing and Communication Sciences , Engineering and Technology

Development of Regulatory Documentation for a Novel Neurorehabilitation Device: Preparation for FDA and Swissmedic Compliance

Rehabilitation Engineering Lab

Stroke is a leading cause of long-term disability, affecting millions annually and necessitating innovative approaches to rehabilitation. The Rehabilitation Engineering Laboratory (RELab) at ETH Zurich is developing a novel closed-loop neurorehabilitation device that integrates real-time motion tracking with non-invasive brain stimulation to enhance neural plasticity and promote motor recovery in stroke patients. To advance this technology toward clinical trials, comprehensive regulatory documentation is essential to meet the stringent requirements of the U.S. Food and Drug Administration (FDA) and Swissmedic. This project focuses on preparing an Investigational Device Exemption (IDE) application for the FDA and supporting documentation for Swissmedic compliance, including technical descriptions, risk analyses, and clinical study protocols. The student will conduct literature reviews, draft regulatory documents, and support risk management in accordance with ISO 14971, contributing to the device’s regulatory pathway. This work offers a unique opportunity to gain expertise in medical device regulation, bridging biomedical engineering and neuroscience, and advancing a transformative solution for stroke rehabilitation.

Keywords

regulatory affairs, medical device, non-invasive brain stimulation, FDA, Swissmedic, investigational device exemption, IDE, stroke rehabilitation, compliance

Labels

Semester Project , Internship , Bachelor Thesis , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-19 , Earliest start: 2025-05-25 , Latest end: 2025-08-01

Organization Rehabilitation Engineering Lab

Hosts Donegan Dane , Viskaitis Paulius

Topics Medical and Health Sciences , Engineering and Technology

Global Optimization Enabled by Learning

Robotic Systems Lab

We aim to characterize optimization landscapes using metrics such as Sobolev norms (measuring function smoothness), Hessian spectral properties (indicating curvature), and the tightness of semidefinite programming (SDP) relaxations (relevant for polynomial optimization). The core innovation lies in translating these metrics into differentiable objectives or regularizers. By incorporating these into the training process, we encourage the learned modules to produce downstream optimization problems that are inherently well-conditioned and possess favourable global structures.

Keywords

Optimization, Learning, Optimal, Robotics

Labels

Semester Project , Master Thesis , ETH Zurich (ETHZ)

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-19 , Earliest start: 2025-06-01 , Latest end: 2026-06-01

Organization Robotic Systems Lab

Hosts Talbot William , Tuna Turcan

Topics Engineering and Technology

Strategic Financial Modelling and Business Plan Development for a Breakthrough Neurorehabilitation Device

Rehabilitation Engineering Lab

With over 14 million stroke cases annually, the global neurorehabilitation market presents a multi-billion-dollar opportunity for innovative solutions addressing motor recovery. The Rehabilitation Engineering Laboratory (RELab) at ETH Zurich is developing a revolutionary closed-loop neurorehabilitation device that leverages motion tracking and non-invasive brain stimulation to transform stroke rehabilitation. This project aims to develop a sophisticated financial model and a strategic business plan to propel the device to market leadership. The student will conduct market analysis, build financial projections, and craft a compelling business strategy, focusing on pricing, reimbursement, and investor engagement. By delivering investor-ready materials and a scalable commercialization plan, this work will position the device for rapid market entry and long-term success, offering the student a unique opportunity to blend business strategy, entrepreneurship, and healthcare innovation.

Keywords

financial modelling, business strategy, medical device, neurorehabilitation, startup, stroke rehabilitation, entrepreneurship, market entry, investment

Labels

Semester Project , Internship , Bachelor Thesis , Master Thesis

Description

Stroke impacts over 14 million people annually, creating a massive global demand for effective rehabilitation solutions and placing a significant burden on healthcare systems. The Rehabilitation Engineering Laboratory (RELab) at ETH Zurich is pioneering a game-changing closed-loop neurorehabilitation device that integrates cutting-edge motion tracking with non-invasive brain stimulation to accelerate motor recovery in stroke patients. As this revolutionary technology approaches the commercialization phase, a compelling financial model and a strategic business plan are essential to capture investor interest, secure funding, and dominate the growing neurorehabilitation market.

This project offers an unpaid internship or thesis opportunity for a driven student to spearhead the development of a robust financial model and a high-impact business plan for this innovative medical device. Working closely with a dynamic team of engineers, neuroscientists, and business strategists, the student will gain unparalleled experience in financial forecasting, market analysis, and startup strategy within the high-stakes medical device sector. This role is tailored for ambitious, business-minded individuals eager to blend entrepreneurial acumen with biomedical innovation to launch a transformative healthcare solution.

Goal

The goal of this project is to craft a sophisticated financial model and a persuasive business plan to drive the successful commercialization of a breakthrough neurorehabilitation device. This involves projecting financial viability, defining a scalable business model, and developing a market entry strategy that positions the device as a leader in the stroke rehabilitation market. The project will focus on quantifying market opportunities, engaging key stakeholders (e.g., healthcare providers, insurers, investors), and building a roadmap to achieve rapid market penetration and sustainable growth.

Tasks

  • Perform in-depth market research to assess the global neurorehabilitation market, identifying growth trends, key competitors, and untapped opportunities in stroke rehabilitation.
  • Build a comprehensive financial model, including R&D and manufacturing cost projections, pricing strategies, revenue forecasts, and ROI scenarios to attract investors.
  • Develop a strategic business plan, encompassing executive summary, competitive analysis, go-to-market strategy, operational roadmap, and funding requirements.
  • Investigate reimbursement models and payer dynamics (e.g., private insurers, Medicare, European healthcare systems) to optimize pricing and market access.
  • Conduct a SWOT analysis to highlight the device’s competitive edge and address potential market risks.
  • Create investor-focused deliverables, such as pitch decks, financial summaries, and one-pagers, to secure funding from venture capitalists and strategic partners.
  • Ensure all financial and business planning materials are well-organized, professional, and ready for stakeholder presentations.
  • (For thesis students) Produce a thesis analyzing the financial and strategic framework for commercializing the device, with actionable recommendations for market leadership and scalability.
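The core arithmetic behind the revenue forecasts and ROI scenarios is discounted cash flow. All figures in the sketch below are invented placeholders for illustration, not estimates for this device:

```python
def npv(cashflows, rate):
    """Net present value of yearly cashflows at a given discount rate;
    cashflows[0] occurs today, cashflows[t] after t years."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical 5-year scenario (all figures illustrative):
# an upfront outlay in year 0, then revenue ramping with units sold.
price, cost_per_unit, fixed = 25.0, 8.0, 500.0   # per-unit price/cost, yearly fixed cost
units = [0, 40, 120, 300, 600]                    # units sold per year
cashflows = [price * u - cost_per_unit * u - fixed for u in units]
value = npv(cashflows, rate=0.10)                 # 10% discount rate
```

A full model would layer scenarios (pricing, reimbursement rates, adoption curves) on top of this calculation and report the NPV and break-even point for each.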

Your Profile

We are seeking a highly motivated BSc or MSc student with a background in Business Administration, Finance, Entrepreneurship, Economics, or a related field. Candidates with an interest in Biomedical Engineering or Health Sciences are also welcome if they have a strong business orientation. Ideal candidates should have:

  • Passion for entrepreneurship, medical device innovation, and disrupting the healthcare industry.
  • Experience or strong interest in financial modelling, business strategy, or startup development.
  • Proficiency in Excel for financial analysis; familiarity with Python, R, or business intelligence tools is a plus.
  • Understanding of the healthcare or medical device sector is advantageous but not required.
  • Sharp analytical skills and a knack for translating data into actionable business insights.
  • Exceptional communication skills in English to craft persuasive business documents and pitches; German is a plus for European market analysis.
  • A proactive, results-driven mindset and the ability to thrive in a fast-paced, interdisciplinary startup environment.

This project offers a rare chance to hone high-demand skills in financial strategy, startup planning, and medical device commercialization while shaping the future of healthcare.

Contact Details

To apply, please submit your application via the button below. Include a concise motivation statement outlining why you’re excited about this project and how your business skills will drive its success. Attach a mini CV highlighting your current program of study, grades, and any relevant experience or skills. Contact: Dr. Paulius Viskaitis: paulius.viskaitis@hest.ethz.ch Dr. Dane Donegan: dane.donegan@hest.ethz.ch

More information

Open this project... 

Published since: 2025-05-19 , Earliest start: 2025-05-25 , Latest end: 2025-09-01

Organization Rehabilitation Engineering Lab

Hosts Viskaitis Paulius

Topics Medical and Health Sciences , Engineering and Technology , Economics , Commerce, Management, Tourism and Services

Stanford – UC Berkeley Collaboration: Learning Progress Driven Reinforcement Learning for ANYmal

Robotic Systems Lab

TLDR: Improving the navigation capabilities of ANYmal with RL in simulation by optimizing for learning progress.

Labels

Semester Project , Master Thesis

Description

Project Description:

Navigation can be viewed similarly to next-word prediction in language models—a seemingly simple task that requires a deep understanding of the environment. In this project, we aim to advance reinforcement learning (RL) for navigation by investigating whether an agent can autonomously select increasingly challenging tasks that drive continuous improvement in its navigation behavior.

The core idea is to use learning progress as a signal for task difficulty. Tasks where the agent is improving are considered valuable, as they are neither too easy (where the agent already performs well) nor too hard (where little to no learning occurs). This naturally focuses learning on the "sweet spot" of challenge, avoiding both uninformative, trivial experiences and tasks beyond the agent’s current capability.

By adopting this approach, we can train more sophisticated navigation agents in procedurally generated environments with randomly assigned goals—without relying on hand-crafted heuristics to define a curriculum of terrain difficulty.
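The "learning progress as difficulty signal" idea above can be sketched as a simple task sampler. This is an illustrative stdlib sketch under our own assumptions (windowed-return estimator, exploration floor), not the project's actual implementation:

```python
import random
from collections import defaultdict, deque

class ProgressSampler:
    """Sample tasks in proportion to absolute learning progress,
    estimated as the change in mean return between two recent windows."""

    def __init__(self, task_ids, window=10, eps=1e-3):
        self.task_ids = list(task_ids)
        self.window = window
        self.eps = eps  # exploration floor so no task is starved entirely
        self.history = defaultdict(lambda: deque(maxlen=2 * window))

    def record(self, task_id, episode_return):
        self.history[task_id].append(episode_return)

    def progress(self, task_id):
        h = list(self.history[task_id])
        if len(h) < 2 * self.window:
            return self.eps  # not enough data yet: treat as mildly interesting
        old = sum(h[: self.window]) / self.window
        new = sum(h[self.window:]) / self.window
        return abs(new - old) + self.eps  # large gap = agent is still learning here

    def sample(self):
        weights = [self.progress(t) for t in self.task_ids]
        return random.choices(self.task_ids, weights=weights, k=1)[0]
```

Tasks where returns are flat (already mastered or hopeless) get near-zero weight, so sampling naturally concentrates on the "sweet spot" without a hand-crafted curriculum.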

What We Offer:

  • Active support and mentorship, including help on potential publications
  • Collaboration with leading international institutions (Stanford, UC Berkeley)
  • Access to high-performance compute clusters
  • Weekly meetings and career guidance

Related Work:

  • Rudin, Nikita, David Hoeller, Philipp Reist, and Marco Hutter. "Learning to walk in minutes using massively parallel deep reinforcement learning." In Conference on Robot Learning, pp. 91-100. PMLR, 2022.
  • Lee, Joonho, Marko Bjelonic, Alexander Reske, Lorenz Wellhausen, Takahiro Miki, and Marco Hutter. "Learning robust autonomous navigation and locomotion for wheeled-legged robots." Science Robotics 9, no. 89 (2024): eadi9641.
  • Zhang, Chong, Jin Jin, Jonas Frey, Nikita Rudin, Matías Mattamala, Cesar Cadena, and Marco Hutter. "Resilient legged local navigation: Learning to traverse with compromised perception end-to-end." In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 34-41. IEEE, 2024.
  • Wang, Xin, Yudong Chen, and Wenwu Zhu. "A survey on curriculum learning." IEEE transactions on pattern analysis and machine intelligence 44, no. 9 (2021): 4555-4576.
  • Portelas, Rémy, Cédric Colas, Katja Hofmann, and Pierre-Yves Oudeyer. "Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments." In Conference on Robot Learning, pp. 835-853. PMLR, 2020.
  • Li, Chenhao, Elijah Stanger-Jones, Steve Heim, and Sangbae Kim. "Fld: Fourier latent dynamics for structured motion representation and learning." arXiv preprint arXiv:2402.13820 (2024).

Work Packages

Requirements

  • Highly motivated
  • Experience with PyTorch or similar ML frameworks
  • (Optional) Background in RL

Contact Details

Please contact Chenhao Li and Jonas Frey via email: chenhao.li@ai.ethz.ch (CC jonfrey@ethz.ch)

  • Subject: Application: RL Learning Progress - Firstname Lastname

  • Content: Please provide 3 sentences why you would like to do the project within the mail.

  • Appendix: Bachelor transcript, current Master transcript, and CV. If you did not obtain your Bachelor's degree at ETH, please provide a relative grade ranking.

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-05-14 , Earliest start: 2025-05-14 , Latest end: 2025-08-31

Applications limited to EPFL - Ecole Polytechnique Fédérale de Lausanne , ETH Zurich , University of Zurich

Organization Robotic Systems Lab

Hosts Frey Jonas

Topics Information, Computing and Communication Sciences , Engineering and Technology , Behavioural and Cognitive Sciences

Fine-tuning Policies in the Real World with Reinforcement Learning

Robotics and Perception

Explore online fine-tuning in the real world of sub-optimal policies.

Keywords

online fine-tuning, reinforcement learning (RL), continual learning, drones, robotics

Labels

Semester Project , Master Thesis

Description

Training sub-optimal policies is relatively straightforward and provides a solid foundation for reinforcement learning (RL) agents. However, these policies cannot improve online in the real world, such as when racing drones with RL. Current methods fall short in enabling drones to adapt and optimize their performance during deployment. Imagine a drone equipped with an initial sub-optimal policy that can navigate a race course but not with maximum efficiency. As the drone races, it learns to optimize its maneuvers in real-time, becoming faster and more agile with each lap. Applicants are expected to be proficient in Python, C++, and Git.

Goal

This project aims to explore online fine-tuning in the real world of sub-optimal policies using RL, allowing racing drones to improve continuously through real-world interactions.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Elie Aljalbout [aljalbout (at) ifi (dot) uzh (dot) ch]

More information

Open this project... 

Published since: 2025-05-13 , Earliest start: 2025-06-01 , Latest end: 2026-04-30

Organization Robotics and Perception

Hosts Geles Ismail

Topics Information, Computing and Communication Sciences , Engineering and Technology

Inverse Reinforcement Learning from Expert Pilots

Robotics and Perception

Use Inverse Reinforcement Learning (IRL) to learn reward functions from previous expert drone demonstrations.

Keywords

Inverse Reinforcement Learning, Drones, Robotics

Labels

Semester Project , Master Thesis

Description

Drone racing demands split-second decisions and precise maneuvers. However, training drones for such races relies heavily on crafted reward functions. These methods require significant human effort in design choices and limit the flexibility of learned behaviors. Inverse Reinforcement Learning (IRL) offers a promising alternative. IRL allows an AI agent to learn a reward function by observing expert demonstrations. Imagine an AI agent analyzing recordings of champion drone pilots navigating challenging race courses. Through IRL, the agent can infer the implicit factors that contribute to success in drone racing, such as speed and agility. Applicants are expected to be proficient in Python, C++, and Git.
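As a toy illustration of how a reward can be inferred from demonstrations, here is one feature-expectation-matching step in the spirit of apprenticeship learning (Abbeel & Ng, 2004). The interface and any drone-specific featurization are our own hypothetical choices, not the project's method:

```python
def feature_expectation(trajectories, featurize):
    """Average feature vector over all states visited in the given trajectories."""
    total, count = None, 0
    for traj in trajectories:
        for state in traj:
            f = featurize(state)
            total = f if total is None else [a + b for a, b in zip(total, f)]
            count += 1
    return [x / count for x in total]

def reward_weights(expert_trajs, policy_trajs, featurize):
    """One apprenticeship-learning step: the reward direction is the gap
    between expert and current-policy feature expectations. featurize is
    task-specific, e.g. speed or distance-to-gate for drone racing
    (our illustrative choice, not from the project text)."""
    mu_e = feature_expectation(expert_trajs, featurize)
    mu_p = feature_expectation(policy_trajs, featurize)
    return [e - p for e, p in zip(mu_e, mu_p)]
```

Features that expert pilots consistently realize more than the current policy (e.g. higher speed through gates) receive positive reward weight; full IRL pipelines iterate this step with policy re-optimization.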

Goal

We want to explore the application of Inverse Reinforcement Learning (IRL) for training RL agents performing drone races or FPV freestyle to develop methods that extract valuable knowledge from the actions and implicit understanding of expert pilots. This knowledge will then be translated into a robust reward function suitable for autonomous drone flights.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to: Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Leonard Bauersfeld [bauersfeld (at) ifi (dot) uzh (dot) ch]

More information

Open this project... 

Published since: 2025-05-13 , Earliest start: 2025-06-01 , Latest end: 2026-04-30

Organization Robotics and Perception

Hosts Geles Ismail

Topics Information, Computing and Communication Sciences , Engineering and Technology

Advancing Low-Latency Processing for Event-Based Neural Networks

Robotics and Perception

Design and implement efficient event-based networks to achieve low latency inference.

Keywords

Computer Vision, Event Cameras

Labels

Semester Project , Master Thesis

Description

Event cameras offer remarkable advantages, including ultra-high temporal resolution in the microsecond range, immunity to motion blur, and the ability to capture high-speed phenomena (https://youtu.be/AsRKQRWHbVs). These features make event cameras invaluable for applications like autonomous driving. However, efficiently processing the sparse event streams while maintaining low latency remains a difficult challenge. Previous research has focused on developing sparse update frameworks for event-based neural networks to reduce computational complexity, i.e., FLOPs. This project takes the next step by directly lowering the processing runtime to unlock the full potential of event cameras for real-time applications.

Goal

The focus of the project is to reduce runtime on common hardware (GPUs), which is highly optimized for parallelization. The project will explore radically new processing paradigms, which can potentially be transferred to standard frames as well. This ambitious project requires a strong sense of curiosity, self-motivation, and a principled approach to tackling research challenges. You should have solid Python programming skills and experience with at least one deep learning framework. If you’re excited about exploring cutting-edge techniques to push the boundaries, please feel free to contact us.

Key Requirement

  • Background in Deep Learning: Proficiency in Python and familiarity with state-of-the-art deep learning frameworks.

  • Problem-Solving Skills: Ability to approach research problems in a principled way.

Contact Details

Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Nikola Zubic [zubic (at) ifi (dot) uzh (dot) ch], Prof. Davide Scaramuzza [sdavide (at) ifi (dot) uzh (dot) ch]

More information

Open this project... 

Published since: 2025-05-12 , Earliest start: 2024-12-12

Organization Robotics and Perception

Hosts Messikommer Nico

Topics Information, Computing and Communication Sciences

Multi-Critic Reinforcement Learning for Whole-Body Control of Bimanual Legged Manipulator

Robotic Systems Lab

Recent work in legged robotics shows the promise of unified control strategies for whole-body control. Portela et al. (2024) demonstrated force control without force sensors, enabling compliant manipulation through body coordination. In another study, they achieved accurate end-effector tracking using whole-body RL with terrain-aware sampling. Fu et al. (2023) showed that unified policies can dynamically handle both movement and manipulation in quadruped robots by training with two critics, one for the arms and one for the legs, and then gradually combining them.

In this project, you will investigate reinforcement learning for whole-body control of a bimanual legged manipulator. You will first implement a baseline single-critic whole-body controller for the system, then investigate different multi-critic approaches and their effects on training and on the final performance of the whole-body controller.

References: Learning Force Control for Legged Manipulation, Portela et al., 2024. Whole-Body End-Effector Pose Tracking, Portela et al., 2024. Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion, Fu et al., 2023.
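The two-critic idea can be sketched as per-critic advantage normalization followed by a weighted blend. This is a simplified stdlib illustration of the technique, assuming a scalar blend weight; it is not the implementation of Fu et al.:

```python
import math

def normalize(xs):
    """Standardize a batch of advantages so each critic contributes
    on a comparable scale regardless of its reward magnitude."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    std = math.sqrt(var) + 1e-8
    return [(x - mean) / std for x in xs]

def combine_advantages(adv_arm, adv_leg, w_arm=0.5):
    """Blend per-critic advantages for a single policy-gradient update.
    Annealing w_arm over training gradually merges the two objectives,
    in the spirit of the multi-critic scheme described above."""
    a = normalize(adv_arm)
    l = normalize(adv_leg)
    return [w_arm * x + (1 - w_arm) * y for x, y in zip(a, l)]
```

Normalizing before blending prevents the critic with the larger reward scale (often the locomotion term) from dominating the shared policy gradient.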

Labels

Semester Project , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-05-09 , Earliest start: 2025-05-11 , Latest end: 2025-12-31

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Fischer Oliver , Elanjimattathil Aravind

Topics Engineering and Technology

Visual Language Models for Long-Term Planning

Robotic Systems Lab

This project uses Visual Language Models (VLMs) for high-level planning and supervision in construction tasks, enabling task prioritization, dynamic adaptation, and multi-robot collaboration for excavation and site management.

Keywords

Visual Language Models, Long-term planning, Robotics

Labels

Semester Project , Master Thesis

Description

VLMs excel in reasoning and dynamic code generation, making them ideal for tasks like excavation sequencing, obstacle management, and multi-robot coordination. Applications include dynamic trenching, rock field clearing, and safety monitoring. The goal is to deploy VLM-based systems on autonomous excavators to enhance efficiency and adaptability.

Work Packages

Develop simulated and real scenarios for VLM-driven planning.

Integrate VLMs into excavation control systems for triggering tasks and code generation.

Benchmark performance in complex planning scenarios.

Requirements

Contact Details

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-05-07 , Earliest start: 2025-06-01 , Latest end: 2025-12-31

Organization Robotic Systems Lab

Hosts Terenzi Lorenzo

Topics Information, Computing and Communication Sciences

AI Agents for Excavation Planning

Robotic Systems Lab

Recent advancements in AI, particularly with models like Claude 3.7 Sonnet, have showcased enhanced reasoning capabilities. This project aims to harness such models for excavation planning tasks, drawing parallels from complex automation scenarios in games like Factorio. We will explore the potential of these AI agents to plan and optimize excavation processes, transitioning from simulated environments to real-world applications with our excavator robot.

Keywords

GPT, Large Language Models, Robotics, Deep Learning, Reinforcement Learning

Labels

Semester Project , Master Thesis

Description

The evolution of large language models (LLMs) has opened new avenues in automation and planning. Notably, Claude 3.7 Sonnet introduces hybrid reasoning, enabling both rapid responses and detailed, step-by-step problem solving [Anthropic, 2025]. Such capabilities position these models as candidates for tasks requiring intricate planning, such as excavation, with a possible final deployment in the real world on our excavator robot. Excavation planning is a challenging problem requiring spatial reasoning, decision-making under constraints, and long-horizon planning. Recent advances in AI have produced agents that can master complex games like Go and navigate automation-heavy environments. This project aims to determine whether these AI systems can plan excavation tasks efficiently and how they compare to reinforcement learning-based approaches.

Terra is a flexible, JAX-accelerated grid-world environment designed for training AI agents in earthworks planning. It allows for high-level motion and excavation planning, formulated as a reinforcement learning (RL) problem. Terra's multi-GPU capabilities enable rapid training, achieving intelligent excavation planning in minutes on high-end hardware.

First, we will test the zero-shot capabilities of state-of-the-art LLMs and agents in excavation planning. By providing structured prompts and game renderings, we will evaluate whether these models can reason effectively about excavation tasks.
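A zero-shot evaluation pipeline of this kind might look as follows. The grid encoding, the action set, and the `query_model` callable are hypothetical stand-ins for Terra's actual observation format and a real LLM client (e.g. the Anthropic or OpenAI APIs), which we deliberately do not assume here:

```python
def render_grid(grid):
    """Serialize a Terra-style dig map to text ('D' = cell to excavate,
    '.' = free, 'X' = obstacle) so an LLM can reason over it."""
    return "\n".join("".join(row) for row in grid)

def build_prompt(grid):
    """Structured prompt: map rendering plus a constrained action vocabulary."""
    return (
        "You control an excavator on this grid map:\n"
        f"{render_grid(grid)}\n"
        "Reply with exactly one action from: MOVE_N, MOVE_S, MOVE_E, MOVE_W, DIG."
    )

def plan_step(grid, query_model):
    """query_model is any callable wrapping an LLM; keeping it abstract
    avoids tying the pipeline to a specific client library."""
    reply = query_model(build_prompt(grid))
    action = reply.strip().split()[0].upper()
    valid = {"MOVE_N", "MOVE_S", "MOVE_E", "MOVE_W", "DIG"}
    return action if action in valid else "DIG"  # fall back to a safe default
```

Constraining the output vocabulary and parsing defensively makes the zero-shot comparison against RL baselines well defined even when the model replies with extra prose.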

Work Packages

  • Design a pipeline that enables modern AI agents and large language models (LLMs) to play the excavation planning game in Terra.

  • Evaluate whether models like Claude 3.7 Sonnet or GPT-4 can solve excavation tasks zero-shot or require fine-tuning.

  • Train AI models with reinforcement learning using Terra’s multi-GPU acceleration.

  • Deploy the trained models onto a real-world robotic excavator for autonomous excavation.

Requirements

  • General programming experience with Python
  • Experience training neural networks
  • Bonus: experience with large language models

Contact Details

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-05-07 , Earliest start: 2025-07-01 , Latest end: 2025-12-31

Organization Robotic Systems Lab

Hosts Terenzi Lorenzo

Topics Engineering and Technology

Transcatheter Heart Valve Repair and Replacement Devices at Harvard Medical School

Multiscale Robotics Lab

Master thesis on novel devices and tools for both valve repair and replacement at Harvard Medical School

Keywords

Prototyping, Experimental Evaluation, Materials

Labels

Master Thesis

Description

Transcatheter procedures avoid the trauma and risks of open-heart surgery by delivering devices that are intended to replicate surgical repair and replacement. We are creating novel devices and tools for both valve repair and replacement. These projects require innovative design and creative problem-solving skills along with expertise in prototyping and experimental evaluation.

This Master thesis is conducted at the Harvard Medical School, Boston.

Goal

Contact Details

Applicants should inquire by email with Prof. Pierre Dupont (Pierre.Dupont@childrens.harvard.edu) with a description of their qualifications, background, and availability.

More information

Open this project... 

Published since: 2025-05-02

Applications limited to ETH Zurich

Organization Multiscale Robotics Lab

Hosts Gantenbein Valentin

Topics Medical and Health Sciences , Engineering and Technology

Autonomous Robotic Cardiac Catheters at Harvard Medical School

Multiscale Robotics Lab

We are developing robotic catheters for heart valve repair and for treatment of arrhythmias.

Keywords

Autonomous Control, Medical devices, Animal models, Prototyping

Labels

Master Thesis

Description

We are developing robotic catheters for heart valve repair and for treatment of arrhythmias. Robotics offers the advantage of reducing the learning curve for complex beating-heart procedures and provides a platform for introducing automation. Important components of these projects can include: (1) developing and implementing autonomous control strategies, (2) integration of therapeutic devices, and (3) testing in anatomical and animal models.

This Master thesis is conducted at the Harvard Medical School, Boston.

Goal

Contact Details

Applicants should inquire by email with Prof. Pierre Dupont (Pierre.Dupont@childrens.harvard.edu) with a description of their qualifications, background, and availability.

More information

Open this project... 

Published since: 2025-05-02

Applications limited to ETH Zurich

Organization Multiscale Robotics Lab

Hosts Gantenbein Valentin

Topics Medical and Health Sciences , Engineering and Technology

Feedback Optimization of Acoustic Patterning in Real Time for Bioprinter

Acoustic Robotics for Life Sciences and Healthcare (ARSL)

Our project aims to enhance the ultrasound-assisted bioprinting process using real-time feedback and image processing. We have developed a transparent nozzle equipped with multiple cameras for real-time monitoring. The next steps involve integrating advanced image processing techniques, such as template matching, and implementing a feedback system to optimize the printing process. The system will be fully automated, featuring a function generator for wave creation and cooling elements. By analyzing the printing process and acoustic cell patterning with computer vision and leveraging real-time sensor feedback, we aim to dynamically optimize parameters such as frequency and amplitude for accurate and consistent pattern formation, crucial for bio applications.

Keywords

Machine learning, control and automation, 3D Printing, Ultrasound

Labels

Bachelor Thesis , Master Thesis

Description

Our project focuses on optimizing the ultrasound-assisted bioprinting process by leveraging real-time feedback and image processing. Ultrasound propagates through different materials, creating high and low-pressure nodes that can form various patterns within a printed structure. However, the process is challenged by ultrasound reflection and scattering.

To address these challenges, we have developed a transparent nozzle that allows real-time observation using multiple cameras. This setup enables the integration of advanced image processing techniques, such as template matching, to accurately print specific patterns. Additionally, the system is fully automated, with a function generator for wave creation and cooling elements.

The core of the project involves analyzing the printing process and acoustic cell patterning using computer vision. By incorporating real-time feedback from sensors, we can optimize parameters on the fly, ensuring precise and consistent pattern formation. This approach is tailored for bioapplications, where maintaining specific conditions is crucial for success.

Goal

Utilize the developed transparent nozzle equipped with multiple cameras for real-time observation of the ultrasound-assisted bioprinting process.

Integrate advanced image processing techniques, such as template matching, to accurately print specific patterns.

Implement real-time feedback from sensors to dynamically optimize parameters, ensuring precise and consistent pattern formation.

Apply these optimized techniques to achieve reliable and efficient bioprinting for bio applications.

Experience with coding in Python is necessary, and knowledge of machine learning, control, 3D printing, and fluid dynamics is desirable.

Contact Details

Please send your CV and transcript of records to Prajwal Agrawal at pprajwal@ethz.ch, Mahmoud Medany at mmedany@ethz.ch, and Professor Daniel Ahmed at dahmed@ethz.ch.

More information

Open this project... 

Published since: 2025-04-29 , Earliest start: 2025-02-01 , Latest end: 2025-09-30

Organization Acoustic Robotics for Life Sciences and Healthcare (ARSL)

Hosts Medany Mahmoud

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology , Behavioural and Cognitive Sciences

BEV meets Semantic traversability

Robotic Systems Lab

Enable Birds-Eye-View perception on autonomous mobile robots for human-like navigation.

Keywords

Semantic Traversability, Birds-Eye-View, Localization, SLAM, Object Detection

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

Autonomous driving has made tremendous progress in recent years through innovations in learning-based methods [1]. An emerging enabler is Bird's-Eye-View (BEV) methods that allow vehicles to understand and reason about their surroundings in real time. In this project, we aim to transfer this research to autonomous mobile robots in real-world, human-inhabited environments. While the rules of navigation and traversability are well defined for autonomous driving, one exciting aspect of this project is finding analogous representations for everyday human environments. We would like to explore these methods on a range of robots including Spot, ANYmal, the Ultra Mobility Vehicle, and humanoids.

References

[1] Liao, B., Chen, S., Zhang, Y., Jiang, B., Zhang, Q., Liu, W., ... & Wang, X. (2024). MapTRv2: An end-to-end framework for online vectorized HD map construction. International Journal of Computer Vision, 1-23.

[2] Kim, Y., Lee, J. H., Lee, C., Mun, J., Youm, D., Park, J., & Hwangbo, J. (2024). Learning semantic traversability with egocentric video and automated annotation strategy. arXiv preprint arXiv:2406.02989.

[3] https://rpl-cs-ucl.github.io/STEPP/

This project is hosted at The AI institute in collaboration with RSL.

Work Packages

  • Research latest BEV methods
  • Develop BEV-inspired methods for mobile robotic semantic traversability.
  • Deployment on real robots

Requirements

  • Excellent knowledge of C++, Python
  • Familiarity with learning frameworks, e.g. PyTorch
  • Experience with ROS2 is a plus

Contact Details

Email Abel (agawel@theaiinstitute.com) and Laurent (lkneip@theaiinstitute.com). Please include your CV and up-to-date transcript.

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-04-29 , Earliest start: 2025-01-15 , Latest end: 2025-10-31

Organization Robotic Systems Lab

Hosts Gawel Abel

Topics Information, Computing and Communication Sciences , Engineering and Technology

Scene graphs for robot navigation and reasoning

Robotic Systems Lab

Elevate semantic scene graphs to a new level and perform semantically-guided navigation and interaction with real robots at The AI Institute.

Keywords

Scene graphs, SLAM, Navigation, Spatial Reasoning, 3D reconstruction, Semantics

Labels

Master Thesis , ETH Zurich (ETHZ)

Description

Human environments often adhere to implicit and explicit semantic structures that are easily understood by humans. For autonomous mobile robots to act in these environments, we aim to investigate how to represent this understanding of the environment. One technology that has gained popularity in recent years is the scene graph. Scene graphs allow robots to spatially deconstruct the world into a graph with multiple levels of abstraction, where nodes represent places, rooms, objects, etc., and edges represent the relationships between them. In this project, we aim to elevate semantic scene graphs to a level that enables semantically guided navigation and interaction. We would like to explore these methods on a range of robots including Spot, ANYmal, the Ultra Mobility Vehicle, and humanoids.
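A minimal sketch of such a layered graph is shown below; the level and relation names are illustrative choices, and real systems like Hydra [1] attach full geometric state to each node:

```python
class SceneGraph:
    """Minimal layered scene graph: nodes carry a level of abstraction
    (e.g. building > room > place > object) and edges a relation label."""

    def __init__(self):
        self.nodes = {}   # node_id -> {"level": ..., plus free-form attributes}
        self.edges = []   # (src, dst, relation) triples

    def add_node(self, node_id, level, **attrs):
        self.nodes[node_id] = {"level": level, **attrs}

    def add_edge(self, src, dst, relation):
        self.edges.append((src, dst, relation))

    def children(self, node_id, relation="contains"):
        """Follow one relation type downward, e.g. room -> contained objects."""
        return [d for s, d, r in self.edges if s == node_id and r == relation]
```

Queries like "which objects does the kitchen contain?" then reduce to edge traversals, which is what makes scene graphs convenient for semantically guided navigation.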

References

[1] Hughes, N., Chang, Y., & Carlone, L. (2022). Hydra: A real-time spatial perception system for 3D scene graph construction and optimization. arXiv preprint arXiv:2201.13360.

[2] Honerkamp, D., Büchner, M., Despinoy, F., Welschehold, T., & Valada, A. (2024). Language-grounded dynamic scene graphs for interactive object search with mobile manipulation. IEEE Robotics and Automation Letters.

[3] Gu, Q., Kuwajerwala, A., Morin, S., Jatavallabhula, K. M., Sen, B., Agarwal, A., ... & Paull, L. (2024, May). ConceptGraphs: Open-vocabulary 3D scene graphs for perception and planning. In 2024 IEEE International Conference on Robotics and Automation (ICRA) (pp. 5021-5028). IEEE.

This thesis will be hosted at The AI Institute in collaboration with RSL.

Work Packages

  • Familiarization with latest scene graph frameworks
  • Build new scene graph representations that seamlessly integrate with robot navigation
  • Deployment on real robots

Requirements

  • Excellent knowledge of C++, Python
  • Familiarity with learning frameworks, e.g. PyTorch
  • Experience with ROS2 is a plus

Contact Details

Email Abel (agawel@theaiinstitute.com) and Alex (aliniger@theaiinstitute.com). Please include your CV and up-to-date transcript.

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-04-29 , Earliest start: 2025-01-15 , Latest end: 2025-10-31

Organization Robotic Systems Lab

Hosts Gawel Abel , Kneip Laurent

Topics Information, Computing and Communication Sciences , Engineering and Technology

Modelling and Optimizing the Power Budget of a Bridge-Mounted Camera System for River Waste Monitoring

Robotic Systems Lab

In this thesis, you will contribute to the Autonomous River Cleanup (ARC) by helping develop SARA, a bridge-mounted, camera-based system for monitoring river waste. Your focus will be on modeling the system’s power dynamics to determine the ideal battery and solar panel size, balancing runtime throughout the day against the overall system size and weight. If time allows, you will also validate your findings with tests on the real hardware.

Keywords

system modelling, power electronics, simulations

Labels

Semester Project , Bachelor Thesis

Description

The Autonomous River Cleanup (ARC) is a student-led initiative supported by the Robotic Systems Lab with the goal of removing riverine waste. By joining ARC, you will contribute to ARC's most recent project, SARA. Project SARA is the next iteration of our bridge-mounted camera system to detect waste in rivers and measure pollution. What is new is that we will use a smartphone as the core component of the monitoring system. Today's smartphones are compact, relatively inexpensive, and equipped with a good camera and high computing power. One of the main challenges is the sizing of the external battery and solar panel, and determining the optimal runtime of the system throughout the day. Since it is impossible to run the system 24/7 without increasing the system's complexity, the goal is to find the most suitable time period for monitoring while keeping the system dimensions to a reasonable size.

Work Packages

During your time at ARC, you will do the following:

  • Model the power dynamics of the system based on the system’s power consumption and available solar radiation data

  • Determine the ideal battery capacity and solar panel size while ensuring a reasonable trade-off between the system’s mass and runtime for monitoring

  • If time allows, validate your simulation with experimentation on the real hardware
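The energy-balance model behind the first two work packages can be prototyped in a few lines. The hourly timestep, the ideal-battery clamping rule, and the example profiles are simplifying assumptions; real values would come from the phone's measured draw and local irradiance data:

```python
def simulate_soc(solar_w, load_w, battery_wh, soc0_wh=None):
    """Hour-by-hour energy balance for the bridge-mounted system.
    solar_w / load_w: per-hour generation and consumption in watts
    (with a 1 h timestep, watts map directly to watt-hours).
    Returns the battery state-of-charge trace in Wh."""
    soc = battery_wh / 2 if soc0_wh is None else soc0_wh
    trace = []
    for gen, load in zip(solar_w, load_w):
        soc += gen - load                      # net energy this hour
        soc = max(0.0, min(soc, battery_wh))   # clamp to an ideal battery
        trace.append(soc)
    return trace
```

Sweeping `battery_wh` and scaling the solar profile over a year of radiation data then yields the smallest battery/panel combination whose trace never hits zero during the chosen monitoring window.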

Requirements

Ideally, you already have the following skills or are eager to learn them:

  • Basic knowledge of modeling system dynamics and power electronics

  • Familiarity with programming (ideally Python) for simulating your model

  • Structured and methodical way of working

Contact Details

We look forward to hearing from you! For the application, please specify why you are interested in the project (motivational statement) and include your CV and Transcript of Records. Please reach us via arc@ethz.ch.

Supervisors:

Student(s) Name(s)

Project Report Abstract

More information

Open this project... 

Published since: 2025-04-27 , Earliest start: 2025-05-05 , Latest end: 2025-09-30

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Elbir Emre

Topics Engineering and Technology

Domain Adaptation Techniques for Vision Algorithms on a Smartphone for River Waste Monitoring

Robotic Systems Lab

In this thesis, you will work on SARA, a bridge-mounted, smartphone-based system for detecting and monitoring river waste. The focus will be on selecting lightweight detection and classification models suitable for smartphones and exploring domain adaptation techniques to improve performance across different locations with minimal retraining. Your work will build on previous research at ARC and current literature to develop solutions that balance model robustness and computational efficiency.

Keywords

machine learning, computer vision, domain adaptation techniques

Labels

Semester Project

Description

The Autonomous River Cleanup (ARC) is a student-led initiative supported by the Robotic Systems Lab with the goal of removing riverine waste. By joining ARC, you will contribute to ARC's most recent project, SARA. Project SARA is the next iteration of our bridge-mounted camera system to detect waste in rivers and measure pollution. What is new is that we will use a smartphone as the core component of the monitoring system. Today's smartphones are compact, relatively inexpensive, and equipped with a good camera and sufficient computing power. Usually, the baseline model works well on the location it has been trained on but loses performance once deployed at a new location. Therefore, we need to investigate methods to make our algorithms more robust to location changes and require only minimal retraining since obtaining new location data is tedious. Even though modern smartphones have high computational power, they are still limited compared to other hardware intended for machine learning applications. Thus, we have to rely on lightweight models.

Work Packages

During your time at ARC, you will do the following:

  • Familiarize yourself with previous work at ARC and in the literature on domain adaptation techniques

  • Determine suitable lightweight detection and classification models for a smartphone application

  • Analyze different domain adaptation techniques requiring minimal new data for retraining
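One family of techniques that fits the minimal-retraining constraint is statistics-only adaptation: the deployed detector keeps all of its learned weights and only recomputes its normalization statistics on unlabeled footage from the new location. The sketch below is a minimal numpy illustration of that idea, not code from the project; the function name and toy data are made up for the example.

```python
import numpy as np

def adapt_batchnorm_stats(target_feats, gamma, beta, eps=1e-5):
    """Recompute normalization statistics on unlabeled target-domain features.

    target_feats: (N, C) activations collected at the new deployment site.
    gamma, beta:  the layer's learned affine parameters, kept frozen.
    Only the mean/variance change -- no labels, no gradient steps.
    """
    mu = target_feats.mean(axis=0)
    var = target_feats.var(axis=0)
    return lambda x: gamma * (x - mu) / np.sqrt(var + eps) + beta

# Toy check: features from a "new location" with a shifted distribution
# come out re-centered once the statistics are adapted.
rng = np.random.default_rng(0)
target = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))
bn = adapt_batchnorm_stats(target, gamma=np.ones(4), beta=np.zeros(4))
out = bn(target)
```

Because no labels or gradient updates are needed, this kind of adaptation is cheap enough to run on the smartphone itself.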

Requirements

Ideally, you already have the following skills or are eager to learn them:

  • Familiarity with Python, ROS, and version control

  • Solid knowledge of computer vision and machine learning

  • Structured and methodical working style

Contact Details

We look forward to hearing from you! For the application, please specify why you are interested in the project (motivational statement) and include your CV and Transcript of Records. Please reach out to us at arc@ethz.ch.


More information


Published since: 2025-04-27 , Earliest start: 2025-05-05 , Latest end: 2025-09-30

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Elbir Emre

Topics Engineering and Technology

Optimal Robot Configuration for Autonomous Waste Sorting in Confined Spaces

Robotic Systems Lab

In this thesis, you will contribute to the Autonomous River Cleanup (ARC) by helping improve MARC, our robotic platform for autonomous waste sorting. Your work will focus on optimizing the robot arm configuration by simulating different base locations and degrees of freedom to achieve faster and more efficient pick-and-place movements in a confined space. You will build on our existing simulation environment to model and evaluate various setups.

Keywords

modelling and simulations, robotics, robot dynamics

Labels

Semester Project

Description

The Autonomous River Cleanup (ARC) is a student-led initiative supported by the Robotic Systems Lab with the goal of removing riverine waste. By joining ARC, you will contribute to ARC's current developments, where we aim to improve our Mobile Autonomous Recycling Container - MARC. MARC is our robotic sorting platform for autonomous waste sorting. It consists of two robot arms which pick up waste items from a moving conveyor belt. During previous work, we realized that the location of the robot base and the number of degrees of freedom (DoF) are not ideal in this confined space. Therefore, we want to find a suitable location for the robot’s base in order to better access the workspace and determine the optimal number of DoF for fast and efficient pick-and-place movements.

Work Packages

During your time at ARC, you will do the following:

  • Familiarize yourself with our current simulation environment to optimize our robot configuration

  • Model and simulate different robot arm types in confined spaces for fast pick-and-place movements

  • Determine the optimal number of degrees of freedom and the robot arm location
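As a starting point, the base-placement search can be framed as scoring candidate base poses by how much of the workspace they cover under a simple reach model. The sketch below is illustrative only: a planar annulus reach model stands in for the full arm kinematics, and the conveyor points and candidate poses are toy values.

```python
import math

def reach_limits(links):
    """Annulus reach model for a planar serial arm: outer radius is the sum
    of link lengths, inner radius is what the arm cannot fold inside."""
    r_max = sum(links)
    r_min = max(0.0, 2 * max(links) - r_max)
    return r_min, r_max

def coverage(base, targets, links):
    """Fraction of target points inside the arm's reachable annulus."""
    r_min, r_max = reach_limits(links)
    hits = sum(r_min <= math.dist(base, t) <= r_max for t in targets)
    return hits / len(targets)

def best_base(candidates, targets, links):
    """Brute-force search over candidate base poses."""
    return max(candidates, key=lambda b: coverage(b, targets, links))

# Toy conveyor: pick points along y = 0, x in [0, 1] (meters, illustrative).
belt = [(x / 10, 0.0) for x in range(11)]
candidates = [(0.5, 0.3), (0.5, 2.0), (2.0, 0.3)]
best = best_base(candidates, belt, links=[0.4, 0.35])
```

In the actual project the scoring function would come from the existing simulation environment (e.g., reachability plus cycle time) rather than this annulus test, but the outer optimization loop has the same shape.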

Requirements

Ideally, you already have the following skills or are eager to learn them:

  • Familiarity with Python, ROS, and version control

  • Basic knowledge about path planning algorithms and robot dynamics

  • Experience in working with an existing code framework

Contact Details

We look forward to hearing from you! For the application, please specify why you are interested in the project (motivational statement) and include your CV and Transcript of Records. Please reach out to us at arc@ethz.ch.


More information


Published since: 2025-04-27 , Earliest start: 2025-05-05 , Latest end: 2025-09-30

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Elbir Emre

Topics Engineering and Technology

Thermal Protection of a Bridge-Mounted Camera System for River Waste Monitoring

Robotic Systems Lab

The Autonomous River Cleanup (ARC) is developing SARA, the next iteration of a bridge-mounted, camera-based system to detect and measure riverine waste. Smartphones offer a compact, affordable, and powerful core for year-round monitoring but are vulnerable to shutdowns from extreme heat in summer and cold in winter. This thesis focuses on assessing these thermal challenges and designing protective solutions to ensure reliable, continuous operation.

Keywords

thermodynamics, heat transfer, testing

Labels

Semester Project , Bachelor Thesis

Description

The Autonomous River Cleanup (ARC) is a student-led initiative supported by the Robotic Systems Lab with the goal of removing riverine waste. By joining ARC, you will contribute to ARC's most recent project, SARA. Project SARA is the next iteration of our bridge-mounted camera system to detect waste in rivers and measure pollution. What is new is that we will use a smartphone as the core component of the monitoring system. Today's smartphones are compact, relatively inexpensive, and equipped with a good camera and high computing power. Since we plan to run it throughout the year, the system needs to be reasonably protected against environmental factors. In summer the smartphone overheats, and in winter it gets too cold; in both cases this leads to a shutdown. This thesis aims to assess the thermal influences on the system and derive suitable protective solutions.

Work Packages

During your time at ARC, you will do the following:

  • Analyze thermal and environmental factors acting on the system throughout the year

  • Determine suitable solutions ensuring adequate heat transfer and environmental protection

  • Experimentally test your proposed solutions
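For a first estimate of the thermal behavior, a lumped-capacitance model is often sufficient to judge whether passive protection can keep the phone in its operating range. The sketch below is a rough illustration; the internal dissipation, effective hA, and heat capacity values are made-up placeholders, not measurements.

```python
def phone_temperature(t_ambient, q_internal=1.5, hA=0.08,
                      heat_capacity=45.0, t0=25.0, dt=1.0, steps=3600):
    """Forward-Euler integration of C dT/dt = q_internal - hA (T - T_ambient).

    All parameter values are illustrative: q_internal in W, hA in W/K,
    heat_capacity in J/K, temperatures in degrees Celsius.
    """
    T = t0
    for _ in range(steps):
        T += dt * (q_internal - hA * (T - t_ambient)) / heat_capacity
    return T

# After one simulated hour the phone approaches its steady state,
# T_ambient + q_internal / hA (here: ambient + 18.75 K).
summer = phone_temperature(t_ambient=35.0)   # hot day
winter = phone_temperature(t_ambient=-10.0)  # cold day
```

With these toy numbers the phone settles roughly 19 K above ambient, which is exactly the margin the protective enclosure has to manage: shedding it in summer, retaining it in winter.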

Requirements

Ideally, you already have the following skills or are eager to learn them:

  • Basic knowledge of thermodynamics and fluid dynamics

  • Familiarity with Python and CAD software

  • Interest in working with hardware and physical testing

Contact Details

We look forward to hearing from you! For the application, please specify why you are interested in the project (motivational statement) and include your CV and Transcript of Records. Please reach out to us at arc@ethz.ch.


More information


Published since: 2025-04-27 , Earliest start: 2025-05-05 , Latest end: 2025-09-30

Applications limited to ETH Zurich

Organization Robotic Systems Lab

Hosts Elbir Emre

Topics Engineering and Technology

Agile Flight of Flexible Drones in Confined Spaces

Robotics and Perception

The project aims to create a controller for an interesting and challenging type of quadrotor, where the rotors are connected via flexible joints.

Keywords

Robotics, Autonomous Systems, Model Predictive Control, Quadcopter, Drone Racing, Approximate Dynamic Programming, Model-Based Reinforcement Learning

Labels

Semester Project , Master Thesis

Description

This master's project (or thesis) examines the application of reinforcement learning and numerical optimal control to achieve high-performance, agile flight of a flexible quadrotor in confined environments. While high-fidelity models exist for many robotic platforms, their computational demands often limit their use in real-time control scenarios. This project aims to identify and utilize a model of a flexible quadrotor that strikes a balance between accuracy and efficiency.

The approach combines model predictive control with precise numerical integration over a short initial horizon and a simplified, lower-fidelity model for longer-term planning. This hybrid strategy enables long prediction horizons, which are crucial for executing agile maneuvers, such as flying through narrow gaps or navigating tight indoor spaces.
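The hybrid prediction described above can be sketched as a rollout that switches integrators partway through the horizon: an accurate but expensive step for the near future, a cheap one for the tail. Everything below is illustrative; a 1D point mass with drag stands in for the flexible-quadrotor model.

```python
def rk4_step(f, x, u, dt):
    """One Runge-Kutta-4 step of dx/dt = f(x, u): the 'high-fidelity' integrator."""
    k1 = f(x, u)
    k2 = f([xi + dt / 2 * ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + dt / 2 * ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)], u)
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def euler_step(f, x, u, dt):
    """One explicit-Euler step: the cheap 'low-fidelity' integrator."""
    return [xi + dt * ki for xi, ki in zip(x, f(x, u))]

def hybrid_rollout(x0, controls, f, n_accurate, dt):
    """Predict with RK4 for the first n_accurate steps, Euler afterwards,
    so a long horizon stays affordable."""
    traj = [x0]
    x = x0
    for k, u in enumerate(controls):
        step = rk4_step if k < n_accurate else euler_step
        x = step(f, x, u, dt)
        traj.append(x)
    return traj

# Toy dynamics: point mass with drag, state x = [position, velocity].
f = lambda x, u: [x[1], u - 0.3 * x[1]]
traj = hybrid_rollout([0.0, 0.0], controls=[1.0] * 20, f=f,
                      n_accurate=5, dt=0.05)
```

In the actual project the two models would differ in fidelity (e.g., including or neglecting the flexible-joint dynamics), not just in integrator order, but the horizon-splitting structure is the same.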

A reinforcement learning policy will be developed using the lab’s high-performance simulators to complement and enhance the control strategy. The project offers the opportunity to work at the intersection of learning, planning, and control, with a strong emphasis on deploying high-speed, intelligent robotics in challenging real-world scenarios. It suits students interested in advanced control, dynamics, reinforcement learning, and robotics.

Applicants should have proficiency in model predictive control, numerical optimization, and reinforcement learning, as well as experience in programming with Python and C++. Initial exposure to NMPC solvers such as acados is expected, as well as familiarity with simulation software and real-time data processing. A solid understanding of drone dynamics and control systems, combined with a background in signal processing and nonlinear dynamic systems, is also required.

Goal

This project can be taken as a student project or a master's thesis. Student projects can focus on reinforcement learning or model predictive control, while master's theses are required to compare both.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Rudolf Reiter (rreiter AT ifi DOT uzh DOT ch), Angel Romero (roagui AT ifi DOT uzh DOT ch), and Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch).

More information


Published since: 2025-04-17 , Earliest start: 2025-06-01 , Latest end: 2026-03-01

Organization Robotics and Perception

Hosts Reiter Rudolf

Topics Mathematical Sciences , Information, Computing and Communication Sciences , Engineering and Technology

Vision-Based World Models for Real-Time Robot Control

Robotics and Perception

This project uses vision-based world models as a basis for model-based reinforcement learning, with the goal of achieving a generalizable approach for drone navigation.

Keywords

Robotics, Autonomous Systems, Computer Vision, Foundation Models, Reinforcement Learning

Labels

Semester Project , Master Thesis

Description

This master's project focuses on enabling real-time, vision-based control for quadrotors by distilling large, complex world models into lightweight versions suitable for deployment on resource-constrained platforms. The goal is to achieve fast, efficient inference from camera inputs, supporting agile indoor navigation in previously unseen environments.

Large-scale vision models capable of generating and understanding complex scenes are typically too computationally intensive for onboard use. This project addresses that challenge by applying model distillation techniques to transfer knowledge from a pre-trained, high-capacity model to a smaller, faster one. The distilled models will be deployed on quadrotors to evaluate real-world performance, focusing on latency, energy consumption, and navigation success. Beyond standard RGB input, the project will also investigate using additional visual modalities like depth and semantic segmentation to enhance control capabilities.
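The distillation objective itself is compact: the small model is trained to match the large model's temperature-softened output distribution. A minimal numpy sketch of a Hinton-style soft-target loss follows; the function names and toy logits are illustrative, not from the project's codebase.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in Hinton-style distillation."""
    p = softmax(teacher_logits, temperature)   # soft targets
    q = softmax(student_logits, temperature)   # student prediction
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))
```

Raising the temperature exposes the teacher's relative confidences between classes, which is the "dark knowledge" the lightweight student absorbs; in practice this term is combined with a task loss on the distilled model's own predictions.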

The work will follow a structured timeline, starting with a literature review and dataset setup, moving through distillation and model optimization, and ending with deployment and testing. This project is an excellent fit for students interested in robotics, computer vision, and efficient deep learning, and it offers the chance to contribute to the future of responsive, autonomous robotic systems.

Applicant Requirements:

  • Proficiency in reinforcement learning, robotics, and computer vision

  • Strong programming skills with Python

  • First experience with large neural network world models

  • Knowledge of simulation software and real-time data processing

  • Understanding of drone dynamics and control systems

Goal

Investigate model distillation techniques and their application to vision-based world models for deployment on navigation tasks.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Rudolf Reiter (rreiter AT ifi DOT uzh DOT ch) and Daniel Zhai (dzhai (at) ifi (dot) uzh (dot) ch).

More information


Published since: 2025-04-17 , Earliest start: 2025-05-01 , Latest end: 2026-02-28

Organization Robotics and Perception

Hosts Reiter Rudolf

Topics Information, Computing and Communication Sciences

Vision-Based Reinforcement Learning in the Real World

Robotics and Perception

We aim to learn vision-based policies in the real world using state-of-the-art model-based reinforcement learning.

Keywords

Robotics, Autonomous Systems, Computer Vision, Reinforcement Learning

Labels

Semester Project , Master Thesis

Description

This master's project offers an exciting opportunity to work on real-world vision-based drone flight without relying on simulators. The goal is to develop learning algorithms that enable quadrotors to fly autonomously using visual input, learned directly from real-world experience. By avoiding simulation, this approach opens up new possibilities for the future of robotics.

A significant focus of the project is achieving high sample efficiency and designing a robust safety framework that enables effective exploration by leveraging the latest research results. The project will begin with state-based learning as an intermediate step, progressing toward complete vision-based learning. It builds on recent research advances and a well-established drone navigation and control software stack. The lab provides access to multiple vision-capable quadrotors ready for immediate use.

This project is ideal for motivated master’s students interested in robotics, learning systems, and real-world deployment. It offers a rare chance to contribute to a high-impact area at the intersection of machine learning, control, and computer vision, with strong potential for further academic or industrial opportunities.

Applicants should have proficiency in computer vision, reinforcement learning, and robotics, as well as strong programming skills in Python and C++. Initial experience with large neural network world models is expected, as well as familiarity with simulation software and real-time data processing. A solid understanding of drone dynamics and control systems is also essential.

Goal

The goal is to investigate how the latest reinforcement learning advances push the limits of learning real-world tasks such as agile vision-based flight.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Rudolf Reiter (rreiter AT ifi DOT uzh DOT ch) and Angel Romero (roagui AT ifi DOT uzh DOT ch)

More information


Published since: 2025-04-17 , Earliest start: 2025-05-01 , Latest end: 2026-02-01

Organization Robotics and Perception

Hosts Reiter Rudolf

Topics Information, Computing and Communication Sciences , Engineering and Technology

Meta-model-based-RL for adaptive flight control

Robotics and Perception

This research project aims to develop and evaluate a meta model-based reinforcement learning (RL) framework for addressing variable dynamics in flight control.

Keywords

Model-based Reinforcement Learning, Meta Learning, Drones, Robotics

Labels

Master Thesis

Description

Drone dynamics can change significantly during flight due to variations in load, battery levels, and environmental factors such as wind conditions. These dynamic changes can adversely affect the drone's performance and stability, making it crucial to develop adaptive control strategies. This research project aims to develop and evaluate a meta model-based reinforcement learning (RL) framework to address these variable dynamics. By integrating dynamic models that account for these variations and employing meta-learning techniques, the proposed method seeks to enhance the adaptability and performance of drones in dynamic environments. The project will involve learning dynamic models for the drone, implementing a meta model-based RL framework, and evaluating its performance in both simulated and real-world scenarios, aiming for improved stability, efficiency, and task performance compared to existing RL approaches and traditional control methods. Successful completion of this project will contribute to the advancement of autonomous drone technology, offering robust and efficient solutions for various applications. Applicants are expected to be proficient in Python, C++, and Git.
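At its core, the "meta" part means conditioning the learned dynamics model on a context inferred from recent flight data, so the controller adapts without retraining. A stripped-down illustration with a single context variable (the drone's unknown mass) follows; all numbers and helper names are invented for the example, and a learned context encoder would replace the least-squares fit.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def infer_mass(thrusts, accels):
    """Infer the context variable (mass) from recent transitions by least
    squares on the toy vertical model a = u / m - g, i.e. u = m * (a + g)."""
    u = np.asarray(thrusts, dtype=float)
    a = np.asarray(accels, dtype=float)
    return float(np.dot(a + G, u) / np.dot(a + G, a + G))

def predict_accel(u, mass):
    """Context-conditioned dynamics prediction used for planning."""
    return u / mass - G

# Simulate a drone whose payload changed its mass to 1.3 kg mid-flight.
true_mass = 1.3
thrusts = np.array([12.0, 14.0, 16.0])   # commanded thrust, N
accels = thrusts / true_mass - G         # observed vertical acceleration
mass_hat = infer_mass(thrusts, accels)   # adapted context
```

In the meta-RL setting the same pattern generalizes: a latent context vector summarizes recent transitions, and both the dynamics model and the policy take it as an extra input.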

Goal

Develop methods for meta (model-based) RL to handle variable drone dynamics.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to: Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Elie Aljalbout [aljalbout (at) ifi (dot) uzh (dot) ch], Angel Romero [roagui (at) ifi (dot) uzh (dot) ch]

More information


Published since: 2025-04-10 , Earliest start: 2025-05-01 , Latest end: 2026-04-30

Organization Robotics and Perception

Hosts Geles Ismail

Topics Information, Computing and Communication Sciences , Engineering and Technology

Hardware Design Internship in Brain Imaging

Rehabilitation Engineering Lab

Join us in revolutionizing brain imaging technologies and making them accessible for everyday use. Functional near-infrared spectroscopy (fNIRS) is an emerging technology that enables cost-effective and precise brain measurements, helping to improve neurotherapies and brain health.

Keywords

3D-printing, injection molding, design, brain imaging, neuro, wearables, health, startup

Labels

Internship

Description

Optohive is developing a wearable brain imaging system that provides affordable and accessible insights into brain function, offering a quantitative approach to diagnosing and treating neurological disorders.

Our mission is to advance understanding of the human brain and translate this knowledge into innovative diagnostic and therapeutic solutions, addressing the growing prevalence of neurological conditions.

We are currently seeking a Mechanical Engineer to help drive the continued development and refinement of our hardware.

Goal

In this internship, you will advance our innovative system by working on industrialization and new developments. This position comes with a strong commercial focus, aligning your technical excellence with market needs.

Tasks

You will join our interdisciplinary team, taking on key mechanical engineering responsibilities to further develop and refine our Optohive hardware:

  • Design and develop innovative wearables for Optohive.

  • Prepare components for production, including injection molding processes.

  • Enhance usability and aesthetics for an improved user experience.

  • Develop and automate manufacturing workflows for scalability.

  • Conduct CAD simulations to validate designs and performance.

  • Define product requirements.

Your Profile

  • MSc or BSc in Mechanical Engineering.
  • Prior experience in CAD design, ideally SolidWorks.
  • Interest in working in a dynamic startup environment.

Contact Details

More information


Published since: 2025-04-09 , Earliest start: 2025-04-10 , Latest end: 2025-06-26

Organization Rehabilitation Engineering Lab

Hosts Wyser Dominik

Topics Medical and Health Sciences , Information, Computing and Communication Sciences , Engineering and Technology

Smart Microcapsules for Biomedical Advances

Multiscale Robotics Lab

This Master's thesis/semester project focuses on the microfluidic fabrication of microcapsules with multi-environmental responsiveness. The aim is to develop microcapsule-based microrobots capable of adapting to various environmental cues. We envision that these microrobots will be used for complex tasks in biomedical applications.

Keywords

Microfluidics, Microcapsules, Microrobotics, Responsive Polymers, Biomedical Engineering

Labels

Semester Project , Internship , Master Thesis , Student Assistant / HiWi , ETH Zurich (ETHZ)

Description

Imagine tiny robots, barely visible to the eye, that can navigate complex environments and perform intricate tasks—all without the bulky brains that bigger robots need! Unlike their larger counterparts, micro- and nanomachines can't pack in heavy computational gear. Instead, they rely on their ingenious designs and smart materials to sense, control, and adapt to their surroundings. In this exciting project, we’re pioneering a microfluidic approach to craft intelligent microrobots with several responsive abilities. The breakthroughs from this research will not only answer key questions in robotics but also propel the use of intelligent micromachines in high-impact areas like sophisticated biomedical devices. Dive into this project with us, and be at the forefront of developing the smart, tiny robots of tomorrow!

The following experience or skills would be ideal but not necessary:

  • Knowledge in biomedical engineering.

  • Prior experience in chemistry lab.

  • Experience or knowledge in microfluidic devices.

References

M. Hu et al. "Self‐Reporting Multiple Microscopic Stresses Through Tunable Microcapsule Arrays." Adv. Mater. 37.3 (2025): 2410945

B. J. Nelson & S. Pané “Delivering drugs with microrobots.” Science 382.6675 (2023): 1120-1122.

Goal

  • Manipulation of droplet-generation microfluidic systems. (~ 1 month)

  • Develop microfabrication process to produce microcapsules from different responsive polymers. (~ 3 months)

  • Investigate and test the fabricated intelligent microrobots under different environmental cues. (~ 2 months)

Contact Details

Curious? Please contact minghu@ethz.ch (Dr. Minghan Hu, SNSF Ambizione group leader).

More information


Published since: 2025-04-08 , Earliest start: 2025-07-01

Organization Multiscale Robotics Lab

Hosts Hu Minghan

Topics Medical and Health Sciences , Engineering and Technology , Chemistry

Low-Voltage Soft Actuators for Developing Untethered Robotic Systems

Soft Robotics Lab

We are building the next generation of HALVE (Hydraulically Amplified Low-Voltage Electrostatic) actuators, which are flexible, pouch-based electrostatic actuators operating at voltages 5–10× lower than traditional soft electrostatic systems. You will help us explore novel actuator geometries, ultra-thin functional layers, and new fabrication techniques to unlock scalable, energy-efficient soft robotic systems.

Keywords

soft robotics, low-voltage actuation, dielectric elastomers, electrostatic actuators, fabrication, PVDF-TrFE-CTFE, vapor deposition, CNC sealing, mechatronics, materials science

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

HALVE actuators are soft actuators powered by low voltages, designed for safe and scalable robotic applications. They use dielectric polymers and ultra-thin electrodes fabricated through vapor deposition and blade coating, sealed with a customized CNC technology.

We offer a wide range of topics for thesis and semester projects. Based on your interests, you can focus on material development, actuator design, experimental testing, or application-oriented integration. Projects will be highly hands-on, with access to cleanroom tools, mechanical and electrical testing equipment, and high-voltage electronics.

By the end of your project, you will have developed and tested your own HALVE actuator prototypes—and potentially helped define a new standard in soft actuation.

Goal

  • Literature review on soft electrostatic actuators and materials
  • Design of actuator geometry and electrode/dielectric layer stack
  • Fabrication of thin functional films
  • Experimental testing (force vs. strain, durability, etc.)
  • Performance benchmarking and design iteration
  • Application demo with integrated soft robotic system

More information

You ideally have:

  • Strong interest in soft robotics, materials, or actuator systems

  • Ability to work independently and collaboratively

  • Experience in prototyping, fabrication, or electronics is a plus

  • Familiarity with CAD, Python, or basic circuit design

  • Knowledge of polymer materials and/or electrostatics

  • Open to students from Mechanical, Electrical, Materials Engineering, or related fields

More information


Published since: 2025-04-07 , Earliest start: 2025-04-14 , Latest end: 2026-01-31

Applications limited to ETH Zurich , EPFL - Ecole Polytechnique Fédérale de Lausanne , Empa

Organization Soft Robotics Lab

Hosts Albayrak Deniz , Hinchet Ronan

Topics Engineering and Technology

Combined Muscle and Nerve Tissue Engineering

Soft Robotics Lab

Engineered muscle tissues have applications in regenerative medicine, drug testing, and understanding motion. A key challenge is restoring neuromuscular communication, especially in treating volumetric muscle loss (VML). This project aims to create functional neuromuscular constructs with biomimetic innervation. Scaffolds will be made using electrospun fibers, conductive materials, and drug-loaded graphene. Muscle and nerve cells derived from iPSCs will be seeded into these scaffolds. Constructs will be tested for motion, drug response, and integration in bio-hybrid robotic systems. The platform will advance muscle-nerve regeneration, drug testing, and bio-hybrid robotics.

Keywords

Tissue engineering, innervation, neural tissue, nerve, muscle tissue, scaffold, iPSCs, muscle cells, bioprinting, biofabrication, biohybrid robotics, soft robotics, 3D printing, biomaterials, electrical stimulation, actuation.

Labels

Semester Project , Master Thesis

Description

The project aims to create neuromuscular constructs with biomimetic innervation. It seeks to enhance tissue regeneration using localized delivery of pro-regenerative molecules. Another goal is to demonstrate the constructs' use in drug testing and bio-hybrid robotics. Scaffolds will be fabricated using electrospun fibers made from polycaprolactone, gelatin, and polypyrrole (Figure 1). Carboxyl-functionalized graphene will enable drug release; ferulic acid will promote nerve regeneration. Aligned fiber sheets will be formed and assembled into blocks or conduits. These will be seeded with iPSC-derived muscle cells and motor neurons. A nerve network will be formed by inserting conduits into muscle blocks. Constructs will be tested for actuation response under electrical stimulation and drug modulation. Multi-unit configurations will be built and analyzed to demonstrate robotic neural control.

Work Packages

  • Literature review on muscle-based soft actuators

  • Design of scaffolds for neurotized muscle tissue

  • 3D biofabrication

  • Test regenerative responses to the drug principle

  • Characterization, actuation, and control of the realized tissue

Requirements

  • High motivation and problem-solving ability

  • Knowledge of cell culture, biofabrication, tissue engineering, and/or fluorescence microscopy

  • Must-have: capable of working with a high degree of independence

Goal

This project aims at (i) generating functional neuromuscular constructs with biomimetic innervation design; (ii) proving enhanced tissue regeneration (via localized delivery of pro-regenerative molecules); (iii) demonstrating suitability for functional studies (drug testing and bio-hybrid robotics).

Contact Details

Dr. Miriam Filippi, mfilippi@ethz.ch, Soft Robotics Lab, Institute of Robotics and Intelligent Systems, D-MAVT Prof. Robert Katzschmann, rkk@ethz.ch, Soft Robotics Lab, Institute of Robotics and Intelligent Systems, D-MAVT

More information


Published since: 2025-04-06 , Earliest start: 2025-04-06 , Latest end: 2025-09-30

Organization Soft Robotics Lab

Hosts Filippi Miriam

Topics Medical and Health Sciences , Engineering and Technology , Biology

Develop Dexterous Humanoid Robotic Hands

Soft Robotics Lab

Design and build dexterous human-like robotic hands with us at the Soft Robotics Lab and the ETH spin-off mimic. We will explore different possibilities of developing design features and sub-systems. The developed features shall be integrated into a fully functional robotic hand and applied to solve practical manipulation challenges.

Keywords

humanoid, robotics, hand, dexterity, soft robotics, actuation, prototyping, modeling and control, mechatronics, biomimetic, design, 3D printing, silicone casting, electronics, machine learning, control

Labels

Semester Project , Master Thesis

Description

We build humanoid robotic hands that are dexterous, robust and easy to fabricate. This universal gripper could enable the automation of human-like tasks that are too complex for conventional grippers.

We offer a variety of topics for master’s/bachelor’s thesis or semester projects for students in mechanical engineering, electrical engineering and robotics. Together we will define a focus on specific features for you to independently explore, develop, design and build.

By the end of your project, you should have put together a fully functional robotic hand implementing your novel features and systematically analyzed its performance experimentally.

Work Packages

  • Literature review of existing work
  • Modeling and calculation of concepts, structures, mechanisms, etc.
  • CAD design, PCB design, prototype fabrication and testing
  • System integration, modeling, control
  • Improve and iterate on hand design features
  • Practical experiments solving various manipulation challenges with the robotic hand

Requirements

  • Enrolled Bachelor’s or Master’s student in Robotics, Mechatronics, Electrical, Mechanical Engineering or a related engineering discipline.
  • Proficiency in several of the following technical topics: C++, ROS, ROS2, PCB design, actuation & controls, sensing, CAD design, FEM simulations, 3D printing, silicone casting
  • First-hand experience with prototyping, actuators and sensors
  • Excellent academic track record
  • High motivation and problem-solving ability
  • Capable of both working independently and collaborating in a team
  • Fluent English speaker

Contact Details


More information


Published since: 2025-03-24 , Earliest start: 2025-01-01 , Latest end: 2026-03-31

Organization Soft Robotics Lab

Hosts Weirich Stefan

Topics Engineering and Technology

Advancing Space Navigation and Landing with Event-Based Camera in collaboration with the European Space Agency

Robotics and Perception

In this project, you will investigate the use of event-based cameras for vision-based landing on celestial bodies such as Mars or the Moon.

Keywords

event-based camera, vision-based navigation

Labels

Master Thesis

Description

Event-based cameras offer significant benefits in difficult robotic scenarios characterized by high-dynamic range and rapid motion. These are precisely the challenges faced by spacecraft during landings on celestial bodies like Mars or the Moon, where sudden light changes, fast dynamics relative to the surface, and the need for quick reaction times can overwhelm vision-based navigation systems relying on standard cameras. In this work, we aim to design novel spacecraft navigation methods for the descent and landing phases, exploiting the power efficiency and sparsity of event cameras. Particular effort will be dedicated to developing a lightweight frontend, utilizing asynchronous convolutional and graph neural networks to effectively harness the sparsity of event data, ensuring efficient and reliable processing during these critical phases. The project is in collaboration with the European Space Agency at the European Space Research and Technology Centre (ESTEC) in Noordwijk (NL).

Goal

Investigate the use of asynchronous neural networks (either regular or spiking) for building an efficient frontend system capable of processing event-based data in real time. Experiments will be conducted both on pre-recorded datasets and on data collected during the project. We are looking for students with strong programming (Python/MATLAB) and computer vision backgrounds. Additionally, knowledge of machine learning frameworks (PyTorch, TensorFlow) is required.
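A common preprocessing step for such a frontend is binning the asynchronous event stream into a polarity-signed voxel grid; fully asynchronous graph networks would consume raw event lists instead, but the bookkeeping is similar. The sketch below is a minimal numpy illustration with made-up toy events.

```python
import numpy as np

def events_to_voxel_grid(events, n_bins, height, width):
    """Accumulate (t, x, y, polarity) events into a time-binned voxel grid,
    a common dense input representation for event-based networks."""
    grid = np.zeros((n_bins, height, width))
    t = events[:, 0]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    bins = np.minimum((t_norm * n_bins).astype(int), n_bins - 1)
    for (_, x, y, p), b in zip(events, bins):
        grid[b, int(y), int(x)] += 1.0 if p > 0 else -1.0
    return grid

# Toy stream of three events: (timestamp, x, y, polarity).
events = np.array([[0.0, 1, 2, 1],
                   [0.5, 1, 2, -1],
                   [1.0, 3, 0, 1]])
grid = events_to_voxel_grid(events, n_bins=2, height=4, width=4)
```

Because most cells stay zero, the resulting tensors remain sparse, which is exactly what asynchronous convolutional backbones exploit to keep the power budget low during descent.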

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master) to: Roberto Pellerito (rpellerito@ifi.uzh.ch), Marco Cannici (cannici@ifi.uzh.ch) and Davide Scaramuzza (sdavide@ifi.uzh.ch).

More information

Open this project... 

Published since: 2025-03-19 , Earliest start: 2025-03-23 , Latest end: 2025-12-31

Applications limited to University of Zurich , ETH Zurich

Organization Robotics and Perception

Hosts Cannici Marco , Pellerito Roberto

Topics Engineering and Technology

Time-continuous Facial Motion Capture Using Event Cameras

Robotics and Perception

Traditional facial motion capture systems, including marker-based methods and multi-camera rigs, often struggle to capture fine details such as micro-expressions and subtle wrinkles. While learning-based techniques using monocular RGB images have improved tracking fidelity, their temporal resolution remains limited by conventional camera frame rates. Event-based cameras present a compelling solution, offering superior temporal resolution without the cost and complexity of high-speed RGB cameras. This project explores the potential of event-based cameras to enhance facial motion tracking, enabling the precise capture of subtle facial dynamics over time.

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Traditional facial motion capture systems often rely on marker-based methods or multi-camera rigs to track facial movements. However, these approaches can be limited in capturing fine details such as subtle wrinkles and micro-expressions. Recent advancements in learning-based techniques have enabled high-fidelity facial tracking using monocular RGB images, but the temporal resolution is constrained by the frame rate of conventional cameras. Event-based cameras offer a promising alternative, providing superior temporal resolution without the need for costly and bulky high-speed RGB cameras. This project aims to leverage the advantages of event-based cameras to achieve unprecedented quality in tracking subtle facial movements.

Goal

Develop a facial motion capture system that utilizes event-based cameras to accurately track fine facial movements, including micro-expressions and subtle wrinkles. The system should overcome the limitations of traditional methods by providing higher temporal resolution and capturing intricate facial details.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master) to: Roberto Pellerito (rpellerito@ifi.uzh.ch), Nico Messikommer (nmessi@ifi.uzh.ch) and Davide Scaramuzza (sdavide@ifi.uzh.ch).

More information

Open this project... 

Published since: 2025-03-19 , Earliest start: 2025-03-23 , Latest end: 2025-12-31

Organization Robotics and Perception

Hosts Pellerito Roberto

Topics Engineering and Technology

Intelligent Micromachines Made from Droplet-Based Factory

Multiscale Robotics Lab

We invite applications for a Master's thesis / semester project that focuses on the fabrication of microrobots with custom shapes. Using our developed droplet printing technique, this project will explore how different microrobot shapes, created by different magnetic fields and materials, influence their control behaviors in blood vessels. This research aims to advance biomedical technologies, particularly in targeted drug delivery and minimally invasive procedures.

Keywords

Microrobotics, 4D Printing, Soft Materials, Biomedical Devices

Labels

Semester Project , Master Thesis , Student Assistant / HiWi , ETH Zurich (ETHZ)

Description

Project background

In recent years, the field of microrobotics has garnered significant attention, particularly for its potential applications in biomedical engineering, such as targeted drug delivery, minimally invasive surgery, and precise medical diagnostics. Traditional microrobot fabrication techniques predominantly rely on top-down methods, such as 3D printing and lithography. While effective, these methods often involve complex, time-consuming processes and face limitations in achieving high precision at the microscale.

Project details

Our approach diverges from these conventional methods by employing a bottom-up fabrication technique, leveraging the principles of self-assembly and droplet manipulation. Specifically, we focus on the innovative use of ferrofluid droplets and magnetic fields to sculpt microrobots with customized shapes. This method allows for greater flexibility and precision in designing microrobots, enabling the creation of complex geometries that would be challenging to achieve with top-down techniques.

The following experience or skills would be ideal but not necessary:

  • Know-how in nanoparticles synthesis & self-assembly.

  • Prior experience in chemistry lab.

  • Prior experience or knowledge in magnetic control systems.

References

M. Hu et al. "Shaping the assembly of superparamagnetic nanoparticles." ACS Nano 13.3 (2019): 3015-3022.

B. J. Nelson & S. Pané “Delivering drugs with microrobots.” Science 382.6675 (2023): 1120-1122.

Goal

  • Build a droplet-printing fabrication platform for microrobot fabrication. (~ 1 month)

  • Optimize the fabrication process to produce microrobots with tailored structures. (~ 3 months)

  • Investigate how different microrobot shapes influence their movement under physiological conditions. (~ 2 months)

Contact Details

Please contact minghu@ethz.ch (Dr. Minghan Hu, SNSF Ambizione group leader).

More information

Open this project... 

Published since: 2025-03-18 , Earliest start: 2025-06-02

Organization Multiscale Robotics Lab

Hosts Hu Minghan

Topics Engineering and Technology , Chemistry

Better Scaling Laws for Neuromorphic Systems

Robotics and Perception

This project explores and extends the novel "deep state-space models" framework by leveraging their transfer function representations.

Keywords

deep learning, state space models, transfer function, parameterizations, S4 model, Fourier Transform, Convolution, Neuromorphic Systems, Neuromorphic, Sequence Modeling, Event-based Vision

Labels

Semester Project , Master Thesis

Description

This project explores and extends the novel "deep state-space models" framework by leveraging their transfer function representations. In contrast to time-domain parameterizations (e.g., S4 layers), transfer function parameterization enables direct computation of the model’s corresponding convolutional kernel via a single Fast Fourier Transform. This is state-free, and in theory, it would maintain constant memory and computational overhead regardless of the state size, therefore offering substantial speed and scalability advantages over existing approaches. Building on these promising theoretical results, this project aims to derive better scaling laws for neuromorphic systems by studying and deploying state-free inference in diverse long-sequence and event-based vision applications.
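The state-free computation described above can be sketched in a few lines: given transfer-function coefficients H(z) = b(z)/a(z), the convolution kernel is obtained by evaluating H at the roots of unity and applying one inverse FFT. The NumPy sketch below is a minimal illustration under illustrative naming; for a stable system the time-aliasing introduced by sampling the frequency response becomes negligible as the kernel length grows:

```python
import numpy as np

def kernel_from_transfer_function(b, a, L):
    """Materialize the length-L convolution kernel of a discrete LTI
    system given transfer-function coefficients H(z) = b(z)/a(z).

    H is evaluated at the L-th roots of unity with two FFTs and one
    inverse FFT. No state vector is ever materialized, so the cost
    depends on L but not on the state dimension.
    """
    B = np.fft.fft(b, n=L)          # numerator on the unit circle
    A = np.fft.fft(a, n=L)          # denominator on the unit circle
    H = B / A                       # frequency response samples
    return np.fft.ifft(H).real      # time-domain kernel h[0..L-1]

def causal_conv(u, h):
    """y = u * h via FFT on zero-padded signals (linear convolution)."""
    n = len(u) + len(h) - 1
    Y = np.fft.fft(u, n) * np.fft.fft(h, n)
    return np.fft.ifft(Y).real[: len(u)]
```

For example, the first-order system H(z) = 1 / (1 - 0.5 z^-1) has the impulse response 0.5^n, which the sketch recovers up to a small aliasing term.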

Goal

Implement the transfer function-based state-space model, then comprehensively benchmark its training speed, memory usage, and performance on neuromorphic and event-based vision tasks. Investigate how state-free inference behaves as model size and sequence length grow, deriving empirical or theoretical scaling relationships. Compare this approach with other state-of-the-art methods (e.g., S4, Transformer-based models) in terms of speed, memory footprint, and model accuracy or task performance. Prerequisites include familiarity with the basics of LTI systems and linear ODEs, and with the Python programming language.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master) to Nikola Zubic (zubic@ifi.uzh.ch), Marco Cannici (cannici@ifi.uzh.ch) and Davide Scaramuzza (sdavide@ifi.uzh.ch).

More information

Open this project... 

Published since: 2025-03-18

Applications limited to University of Zurich , ETH Zurich

Organization Robotics and Perception

Hosts Zubic Nikola

Topics Mathematical Sciences , Information, Computing and Communication Sciences , Engineering and Technology

Generalist Excavator Transformer

Robotic Systems Lab

We want to develop a generalist digging agent that can perform multiple tasks, such as digging and moving loose soil, and/or control multiple excavators. We plan to use decision transformers, trained on offline data, to accomplish these tasks.

Keywords

Offline reinforcement learning, transformers, autonomous excavation

Labels

Semester Project , Master Thesis

PLEASE LOG IN TO SEE DESCRIPTION

This project is set to limited visibility by its publisher. To see the project description you need to log in at SiROP. Please follow these instructions:

  • Click link "Open this project..." below.
  • Log in to SiROP using your university login or create an account to see the details.

If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

More information

Open this project... 

Published since: 2025-03-11 , Earliest start: 2025-07-01 , Latest end: 2025-12-31

Organization Robotic Systems Lab

Hosts Werner Lennart , Egli Pascal Arturo , Terenzi Lorenzo , Nan Fang , Zhang Weixuan

Topics Information, Computing and Communication Sciences

Master Thesis: Contact force evaluation of robotic endoscopic system based on Series Elastic Actuation

Bio-Inspired RObots for MEDicine-Laboratory (BIROMED-Lab)

In the BIROMED-Lab we have been developing an endoscopic system for safer neurosurgeries, inspired by human finger anatomy. Its two degrees of freedom allow the endoscope to investigate areas of the brain that would be inaccessible with standard rigid endoscopes. Thanks to springs in the transmission between the motors and the movable endoscope tip, the interaction forces between the instrument and the brain tissue can be reduced. Furthermore, the interaction forces can be estimated by measuring the deflection of the springs. To make the telemanipulation of the endoscope safer and more intuitive for the surgeon, force feedback was also implemented.

Keywords

Robotic surgery, Neurosurgery, Telemanipulation, Haptic feedback, Robotic endoscope

Labels

Master Thesis

Description

This master thesis project will focus on the implementation of an additional degree of freedom for insertion, and the subsequent evaluation of the endoscope. Your task will be to develop a test strategy using a sensorized brain phantom and to perform the evaluation experiments.

Goal

Test the telemanipulation and contact forces of a robotic endoscope for neurosurgery

Contact Details

Sara Lisa Margherita Ettori (PhD candidate): saralisamargherita.ettori@unibas.ch

More information

Open this project... 

Published since: 2025-03-06 , Earliest start: 2025-03-01

Organization Bio-Inspired RObots for MEDicine-Laboratory (BIROMED-Lab)

Hosts Ettori Sara Lisa Margherita , Gerig Nicolas, Dr. , Sommerhalder Michael

Topics Engineering and Technology

Beyond Value Functions: Stable Robot Learning with Monte-Carlo GRPO

Robotic Systems Lab

Robotics is dominated by on-policy reinforcement learning: the paradigm of training a robot controller by iteratively interacting with the environment and maximizing some objective. A crucial ingredient in making this work is the advantage function. On each policy update, algorithms typically sum the gradient log-probabilities of all actions taken in the robot simulation; the advantage function increases or decreases the probabilities of these actions by comparing their “goodness” against a baseline. Current advantage estimation methods use a value function to aggregate robot experience and hence decrease variance. This improves sample efficiency at the cost of introducing some bias.

Stably training large language models via reinforcement learning is well known to be a challenging task. A line of recent work [1, 2] has used Group-Relative Policy Optimization (GRPO) to achieve this feat. In GRPO, a series of answers is generated for each query; the advantage of an answer is computed from how much better it is than the average answer to that query. In this formulation, no value function is required.

Can we adapt GRPO to robot learning? Value functions are known to cause issues in training stability [3] and to result in biased advantage estimates [4]. We are in the age of GPU-accelerated RL [5], training policies by simulating thousands of robot instances simultaneously. This makes a new Monte-Carlo (MC) approach to RL timely, feasible, and appealing.

In this project, the student will investigate the limitations of value-function-based advantage estimation. Using GRPO as a starting point, the student will then develop MC-based algorithms that exploit the GPU’s parallel simulation capabilities for unbiased variance reduction, enabling stable RL training while maintaining competitive wall-clock time.
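The group-relative advantage at the heart of GRPO can be sketched in a few lines of NumPy. Here each "group" would be a set of rollouts sharing the same initial state, e.g. parallel simulated environments initialized identically; the setup and names are illustrative:

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage estimation without a value function.

    `rewards` has shape (num_groups, group_size): each row holds the
    scalar returns of rollouts that share the same initial state or
    query. The advantage of a rollout is its return standardized
    against its own group -- the group mean acts as the baseline, so
    no learned critic is needed.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)
```

By construction, the advantages within each group are zero-mean, which is what removes the need for a separate baseline network.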

Keywords

Robot Learning, Reinforcement Learning, Monte Carlo RL, GRPO, Advantage Estimation

Labels

Semester Project , Bachelor Thesis , Master Thesis

Description

Co-supervised by Jing Yuan Luo (MuJoCo)

Work Packages

  • Literature research
  • Investigate the bias and variance properties of the PPO value function
  • Design and implement a novel algorithm that achieves variance reduction through Monte-Carlo sampling via massive environment parallelism
  • Re-implement existing SOTA algorithms as benchmarks
  • Bonus: provide theoretical insights to justify your proposed Monte-Carlo method

Requirements

  • Background in machine learning
  • Excellent knowledge of Python

Contact Details


More information

Open this project... 

Published since: 2025-03-05

Organization Robotic Systems Lab

Hosts Klemm Victor

Topics Information, Computing and Communication Sciences , Engineering and Technology , Behavioural and Cognitive Sciences

Electrical Flow-Based Graph Embeddings for Event-based Vision and other downstream tasks

Robotics and Perception

This project explores a novel approach to graph embeddings using electrical flow computations.

Keywords

graph neural networks, graph representation learning, spectral graph theory, network analysis, electrical flow, event-based vision, low-dimensional graph representations

Labels

Master Thesis

Description

Besides RPG, this project will be co-supervised by Aurelio Sulser (from the Alg. & Opt. group at ETH) and Prof. Siddhartha Mishra.

This project explores a novel approach to graph embeddings using electrical flow computations. By leveraging the efficiency of solving systems of linear equations and some properties of electrical flows, we aim to develop a new method for creating low-dimensional representations of graphs. These embeddings have the potential to capture unique structural and dynamic properties of networks. The project will investigate how these electrical flow-based embeddings can be utilized in various downstream tasks such as node classification, link prediction, graph classification and event-based vision tasks.
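One concrete instance of such an embedding can be sketched via the spectral decomposition of the graph Laplacian, whose pseudoinverse encodes the node potentials induced by unit current injections. In the sketch below (illustrative naming, dense linear algebra for clarity), pairwise squared embedding distances recover effective resistances exactly:

```python
import numpy as np

def electrical_embedding(adj, dim=4):
    """Sketch of an electrical-flow-inspired node embedding.

    For a connected graph, the coordinates u_k / sqrt(lam_k) built from
    the nonzero Laplacian eigenpairs give an embedding whose pairwise
    squared distances equal effective resistances between nodes.
    """
    A = np.asarray(adj, dtype=np.float64)
    L = np.diag(A.sum(1)) - A                  # graph Laplacian
    lam, U = np.linalg.eigh(L)                 # ascending eigenvalues
    # Skip the zero eigenvalue (constant eigenvector) of a connected graph.
    lam, U = lam[1 : dim + 1], U[:, 1 : dim + 1]
    return U / np.sqrt(lam)                    # rows = node embeddings

def effective_resistance(emb, i, j):
    """R_eff(i, j) equals the squared distance between embeddings."""
    d = emb[i] - emb[j]
    return float(d @ d)
```

In practice one would replace the dense eigendecomposition with fast Laplacian solvers, which is exactly where the efficiency of solving linear systems enters.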

Goal

The primary goal of this project is to design, implement, and evaluate a graph embedding technique based on electrical flow computations. The student will develop algorithms to compute these embeddings efficiently, compare them with existing graph embedding methods, and apply them to real-world network datasets. The project will also explore the effectiveness of these embeddings in downstream machine learning tasks. Applicants should have a strong background in graph theory, linear algebra, and machine learning, as well as proficiency in Python and ideally experience with graph processing libraries like NetworkX or graph-tool.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master) to Nikola Zubic (zubic@ifi.uzh.ch), Aurelio Sulser (asulser@student.ethz.ch) and Davide Scaramuzza (sdavide@ifi.uzh.ch).

More information

Open this project... 

Published since: 2025-03-04 , Earliest start: 2024-09-02

Applications limited to University of Zurich , ETH Zurich

Organization Robotics and Perception

Hosts Zubic Nikola

Topics Mathematical Sciences , Information, Computing and Communication Sciences , Behavioural and Cognitive Sciences

Leveraging Long Sequence Modeling for Drone Racing

Robotics and Perception

Study the application of Long Sequence Modeling techniques within Reinforcement Learning (RL) to improve autonomous drone racing capabilities.

Keywords

long sequence modeling, state-space models, convolutional neural networks, CNNs, recurrent neural networks, RNNs, sequence dynamics, dynamical systems, reinforcement learning, RL, optimal control, drone racing, machine learning, autonomous navigation

Labels

Master Thesis

Description

Recent advancements in machine learning have highlighted the potential of Long Sequence Modeling as a powerful approach for handling complex temporal dependencies, positioning it as a compelling alternative to traditional Transformer-based models. In the context of drone racing, where split-second decision-making and precise control are paramount, Long Sequence Modeling can offer significant improvements. These models are adept at capturing intricate state dynamics and handling continuous-time parameters, providing the flexibility to adapt to the varying time steps essential for high-speed navigation and obstacle avoidance. This project investigates the application of Long Sequence Modeling techniques in RL to develop advanced autonomous drone racing systems. The ultimate goal is to improve autonomous drones' performance, reliability, and adaptability in competitive racing scenarios.
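The continuous-time property mentioned above, namely adapting to varying time steps, can be illustrated with a diagonal linear state-space model discretized per step by zero-order hold. This is a minimal sketch with illustrative names, not the project's actual model:

```python
import numpy as np

def ssm_scan_variable_dt(a, b, c, u, dt):
    """Scan a continuous-time diagonal SSM x' = a*x + b*u, y = c*x,
    discretizing each step with zero-order hold so the step size may
    vary from sample to sample -- the property that makes such models
    attractive when control decisions arrive at irregular intervals.
    """
    x = np.zeros_like(a)
    ys = []
    for u_k, dt_k in zip(u, dt):
        ad = np.exp(a * dt_k)                          # exact for diagonal a
        bd = np.where(a != 0, (ad - 1.0) / a, dt_k) * b
        x = ad * x + bd * u_k                          # discrete-time update
        ys.append(float(c @ x))
    return np.array(ys)
```

Because the discretization is exact, two steps of size dt under constant input match one step of size 2*dt, which a fixed-rate recurrent model cannot guarantee.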

Goal

Develop a Reinforcement Learning framework based on Long Sequence Modeling tailored for drone racing. Simulate the framework to evaluate its performance in controlled environments. Conduct a comprehensive analysis of the framework’s effectiveness in handling long sequences and dynamic racing scenarios. Ideally, the optimized model should be deployed in real-world drone racing settings to validate its practical applicability and performance.

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master) to Nikola Zubic (zubic@ifi.uzh.ch), Angel Romero Aguilar (roagui@ifi.uzh.ch) and Davide Scaramuzza (sdavide@ifi.uzh.ch).

More information

Open this project... 

Published since: 2025-03-04

Applications limited to ETH Zurich , University of Zurich

Organization Robotics and Perception

Hosts Zubic Nikola

Topics Information, Computing and Communication Sciences , Engineering and Technology

Neural Architecture Knowledge Transfer for Event-based Vision

Robotics and Perception

Perform knowledge distillation from Transformers to more energy-efficient neural network architectures for Event-based Vision.

Keywords

deep neural networks, knowledge transfer, knowledge distillation, event cameras, event-based vision, sequence modeling, transformers, low-energy vision

Labels

Master Thesis

Description

Processing the sparse and asynchronous data from event-based cameras presents significant challenges. Transformer-based models have achieved remarkable results in sequence modeling tasks, including event-based vision, due to their powerful representation capabilities. Despite their success, their high computational complexity and memory demands make them impractical for deployment on resource-constrained devices typical in real-world applications. Recent advancements in efficient sequence modeling architectures offer promising alternatives that provide competitive performance with significantly reduced computational overhead. Recognizing that Transformers already demonstrate strong performance on event-based vision tasks, we aim to leverage their strengths while addressing efficiency concerns.

Goal

Study knowledge transfer techniques to transfer knowledge from complex Transformer models to simpler, more efficient models. Test the developed models on benchmark event-based vision tasks such as object recognition, optical flow estimation, and SLAM.
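A standard starting point for such knowledge transfer is Hinton-style distillation: a weighted sum of hard-label cross-entropy and a temperature-softened KL term pulling the student's distribution toward the teacher's. The plain-NumPy sketch below illustrates the objective; the temperature and weighting are illustrative defaults, not values from this project:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and temperature-scaled
    KL(teacher || student). The T*T factor keeps soft-target gradients
    on the same scale as the hard-label term.
    """
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T))
    kl = (p_t * (np.log(p_t) - log_p_s)).sum(-1).mean() * T * T
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * ce + (1.0 - alpha) * kl
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label loss remains.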

Contact Details

Interested candidates should send their CV, transcripts (bachelor and master) to Nikola Zubic (zubic@ifi.uzh.ch), Giovanni Cioffi (cioffi@ifi.uzh.ch) and Davide Scaramuzza (sdavide@ifi.uzh.ch).

More information

Open this project... 

Published since: 2025-03-04

Applications limited to ETH Zurich , University of Zurich

Organization Robotics and Perception

Hosts Zubic Nikola

Topics Information, Computing and Communication Sciences , Engineering and Technology

Leveraging Human Motion Data from Videos for Humanoid Robot Motion Learning

ETH Competence Center - ETH AI Center

The advancement in humanoid robotics has reached a stage where mimicking complex human motions with high accuracy is crucial for tasks ranging from entertainment to human-robot interaction in dynamic environments. Traditional approaches to motion learning for humanoid robots rely heavily on motion capture (MoCap) data. However, acquiring large amounts of high-quality MoCap data is both expensive and logistically challenging. In contrast, video footage of human activities, such as sports events or dance performances, is widely available and offers an abundant source of motion data.

Building on recent advancements in extracting and utilizing human motion from videos, such as WHAM and the method of "Learning Physically Simulated Tennis Skills from Broadcast Videos," this project aims to develop a system that extracts human motion from videos and applies it to teach a humanoid robot how to perform similar actions. The primary focus will be on extracting dynamic and expressive motions from videos, such as soccer player celebrations, and using these extracted motions as reference data for reinforcement learning (RL) and imitation learning on a humanoid robot.

Labels

Master Thesis

Description

Work packages

Literature research

Global motion reconstruction from videos.

Learning from reconstructed motion demonstrations with reinforcement learning on a humanoid robot.

Requirements

Strong programming skills in Python

Experience in computer vision and reinforcement learning

Publication

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to machine learning / computer vision / robotics conferences.

Related literature

Yuan, Y., Iqbal, U., Molchanov, P., Kitani, K. and Kautz, J., 2022. Glamr: Global occlusion-aware human mesh recovery with dynamic cameras. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11038-11049).

Yuan, Y. and Makoviychuk, V., 2023. Learning physically simulated tennis skills from broadcast videos.

Shin, S., Kim, J., Halilaj, E. and Black, M.J., 2024. Wham: Reconstructing world-grounded humans with accurate 3d motion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2070-2080).

Peng, X.B., Abbeel, P., Levine, S. and Van de Panne, M., 2018. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions On Graphics (TOG), 37(4), pp.1-14.

Goal

The objective of this project is to develop a robust system for extracting human motions from video footage and transferring these motions to a humanoid robot using learning from demonstration techniques. The system will be designed to handle the noisy data typically associated with video-based motion extraction and ensure that the humanoid robot can replicate the extracted motions with high fidelity while respecting physical rules.

Proposed Methodology

Video Data Collection and Motion Extraction:

  • Collect video footage of soccer player celebrations and other dynamic human activities.

  • Start from existing monocular human pose/motion estimation algorithms to extract 3D motion data from the videos.

  • Incorporate physics-based corrections similar to those employed in WHAM to address issues like jitter, foot sliding, and ground penetration in the extracted motion data.

Motion Learning:

  • Apply existing learning-from-demonstration algorithms in a simulated environment to replicate the kinematic motions reconstructed from the videos while respecting physical rules, using reinforcement learning.

Implementation on Humanoid Robot:

  • This is encouraged since we have our robot lying there waiting for you.
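As one concrete example of the "respecting physical rules" step, a DeepMimic-style tracking reward scores the simulated robot's pose against the motion reconstructed from video. The sketch below is illustrative: the weights, scales, and joint-space formulation are assumptions, not this project's tuned design:

```python
import numpy as np

def imitation_reward(q, q_ref, qd, qd_ref, w_pose=0.65, w_vel=0.35,
                     k_pose=2.0, k_vel=0.1):
    """DeepMimic-style pose-tracking reward: exponentials of the squared
    joint-position and joint-velocity errors against the retargeted
    reference motion. A perfectly tracked frame scores 1.0; the reward
    decays smoothly as the simulated pose drifts from the reference.
    """
    r_pose = np.exp(-k_pose * np.sum((q - q_ref) ** 2))
    r_vel = np.exp(-k_vel * np.sum((qd - qd_ref) ** 2))
    return w_pose * r_pose + w_vel * r_vel
```

The smooth exponential shape is what lets RL cope with the noisy reference motions typical of video-based extraction.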

Contact Details

Please include your CV and transcript in the submission.

Manuel Kaufmann

https://ait.ethz.ch/people/kamanuel

kamanuel@inf.ethz.ch

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch

More information

Open this project... 

Published since: 2025-02-25

Applications limited to ETH Zurich , EPFL - Ecole Polytechnique Fédérale de Lausanne

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao , Kaufmann Manuel

Topics Engineering and Technology

Learning Agile Dodgeball Behaviors for Humanoid Robots

ETH Competence Center - ETH AI Center

Agility and rapid decision-making are vital for humanoid robots to safely and effectively operate in dynamic, unstructured environments. In human contexts—whether in crowded spaces, industrial settings, or collaborative environments—robots must be capable of reacting to fast, unpredictable changes in their surroundings. This includes not only planned navigation around static obstacles but also rapid responses to dynamic threats such as falling objects, sudden human movements, or unexpected collisions. Developing such reactive capabilities in legged robots remains a significant challenge due to the complexity of real-time perception, decision-making under uncertainty, and balance control.

Humanoid robots, with their human-like morphology, are uniquely positioned to navigate and interact with human-centered environments. However, achieving fast, dynamic responses—especially while maintaining postural stability—requires advanced control strategies that integrate perception, motion planning, and balance control within tight time constraints. The task of dodging fast-moving objects, such as balls, provides an ideal testbed for studying these capabilities. It encapsulates several core challenges: rapid object detection and trajectory prediction, real-time motion planning, dynamic stability maintenance, and reactive behavior under uncertainty. Moreover, it presents a simplified yet rich framework to investigate more general collision avoidance strategies that could later be extended to complex real-world interactions.

In robotics, reactive motion planning for dynamic environments has been widely studied, but primarily in the context of wheeled robots or static obstacle fields. Classical approaches focus on precomputed motion plans or simple reactive strategies, often unsuitable for highly dynamic scenarios where split-second decisions are critical. In the domain of legged robotics, maintaining balance while executing rapid, evasive maneuvers remains a challenging problem. Previous work on dynamic locomotion has addressed agile behaviors like running, jumping, or turning (e.g., Hutter et al., 2016; Kim et al., 2019), but these movements are often planned in advance rather than triggered reactively. More recent efforts have leveraged reinforcement learning (RL) to enable robots to adapt to dynamic environments, demonstrating success in tasks such as obstacle avoidance, perturbation recovery, and agile locomotion (Peng et al., 2017; Hwangbo et al., 2019). However, many of these approaches still struggle with real-time constraints and robustness in high-speed, unpredictable scenarios.

Perception-driven control in humanoids, particularly for tasks requiring fast reactions, has seen advances through sensor fusion, visual servoing, and predictive modeling. For example, integrating vision-based object tracking with dynamic motion planning has enabled robots to perform tasks like ball catching or blocking (Ishiguro et al., 2002; Behnke, 2004). Yet, dodging requires a fundamentally different approach: instead of converging toward an object (as in catching), the robot must predict and strategically avoid the object’s trajectory while maintaining balance—often in the presence of limited maneuvering time.

Dodgeball-inspired robotics research has been explored in limited contexts, primarily using wheeled robots or simplified agents in simulations. Few studies have addressed the challenges of high-speed evasion combined with the complexities of humanoid balance and multi-joint coordination. This project aims to bridge that gap by developing learning-based methods that enable humanoid robots to reactively avoid fast-approaching objects in real time, while preserving stability and agility.

Labels

Master Thesis

Description

Work packages

Literature research

Utilize simulation platforms (e.g., Isaac Lab) for initial policy development and training.

Explore model-free RL approaches, potentially incorporating curriculum learning to gradually increase task complexity.

Investigate perception models for object detection and trajectory forecasting, possibly leveraging lightweight deep learning architectures for real-time processing.

Implement and test learned behaviors on a physical humanoid robot, addressing the challenges of sim-to-real transfer through domain randomization or fine-tuning.

Requirements

Solid foundation in robotics, control theory, and machine learning.

Experience with reinforcement learning frameworks (e.g., PyTorch, TensorFlow, or RLlib).

Familiarity with robot simulation environments (e.g., MuJoCo, Gazebo) and real-world robot control.

Strong programming skills (Python, C++) and experience with sensor data processing.

Publication

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to machine learning / robotics conferences.

Goal

Perception & Prediction

  • Develop a real-time perception pipeline capable of detecting and tracking incoming projectiles. Utilize camera data or external motion capture systems to predict ball trajectories accurately under varying speeds and angles.
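A minimal version of the trajectory-prediction step can assume ballistic flight and fit the model to noisy track points by least squares. The sketch below (illustrative names, gravity-only dynamics along z) extrapolates the ball position to a query time:

```python
import numpy as np

def predict_ball_position(times, positions, t_query, g=9.81):
    """Fit p(t) = p0 + v0*t - 0.5*g*t^2*e_z to tracked 3D points by
    least squares and extrapolate to t_query. Gravity is assumed to be
    the only acceleration, so after removing its known contribution the
    fit is linear in (p0, v0).
    """
    t = np.asarray(times, dtype=np.float64)
    P = np.asarray(positions, dtype=np.float64)   # (N, 3) track points
    grav = np.zeros_like(P)
    grav[:, 2] = -0.5 * g * t ** 2                # known gravity term
    A = np.stack([np.ones_like(t), t], axis=1)    # (N, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, P - grav, rcond=None)
    p0, v0 = coef                                 # fitted launch state
    out = p0 + v0 * t_query
    out[2] -= 0.5 * g * t_query ** 2
    return out
```

The predicted interception point then feeds the reactive planner, which selects an evasive maneuver clearing that point while keeping the center of mass stable.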

Reactive Motion Planning

  • Design algorithms that plan evasive maneuvers (e.g., side-steps, ducks, or rotational movements) within milliseconds of detecting an incoming threat, ensuring the robot’s center of mass remains stable throughout.

Learning-Based Control

  • Apply reinforcement learning or imitation learning to optimize dodge behaviors, balancing between minimal energy expenditure and maximum evasive success. Investigate policy architectures that enable rapid reactions while handling noisy observations and sensor delays.

Robustness & Evaluation

  • Test the system under diverse scenarios, including multi-ball environments and varying throw speeds. Evaluate the robot’s success rate, energy efficiency, and post-dodge recovery capabilities.

Implementation on Humanoid Robot:

  • This is encouraged since we have our robot lying there waiting for you.

Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch

More information

Open this project... 

Published since: 2025-02-25

Applications limited to ETH Zurich , EPFL - Ecole Polytechnique Fédérale de Lausanne

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao

Topics Engineering and Technology

Learning Real-time Human Motion Tracking on a Humanoid Robot

ETH Competence Center - ETH AI Center

Humanoid robots, designed to mimic the structure and behavior of humans, have seen significant advancements in kinematics, dynamics, and control systems. Teleoperation of humanoid robots involves complex control strategies to manage bipedal locomotion, balance, and interaction with environments. Research in this area has focused on developing robots that can perform tasks in environments designed for humans, from simple object manipulation to navigating complex terrains.

Reinforcement learning has emerged as a powerful method for enabling robots to learn from interactions with their environment, improving their performance over time without explicit programming for every possible scenario. In the context of humanoid robotics and teleoperation, RL can be used to optimize control policies, adapt to new tasks, and improve the efficiency and safety of human-robot interactions. Key challenges include the high dimensionality of the action space, the need for safe exploration, and the transfer of learned skills across different tasks and environments.

Integrating human motion tracking with reinforcement learning on humanoid robots represents a cutting-edge area of research. This approach involves using human motion data as input to train RL models, enabling the robot to learn more natural and human-like movements. The goal is to develop systems that can not only replicate human actions in real time but also adapt and improve their responses over time through learning. Challenges in this area include ensuring real-time performance, dealing with the variability of human motion, and maintaining the stability and safety of the humanoid robot.

Keywords

real-time, humanoid, reinforcement learning, representation learning

Labels

Master Thesis

Description

Work packages

Literature research

Human motion capture and retargeting

Skill space development

Hardware validation encouraged upon availability

Requirements

Strong programming skills in Python

Experience in reinforcement learning and imitation learning frameworks

Publication

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to robotics or machine learning conferences that highlight outstanding robotic performance.

Related literature

Peng, Xue Bin, et al. "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions On Graphics (TOG) 37.4 (2018): 1-14.

Starke, Sebastian, et al. "Deepphase: Periodic autoencoders for learning motion phase manifolds." ACM Transactions on Graphics (TOG) 41.4 (2022): 1-13.

Li, Chenhao, et al. "FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning."

Serifi, A., Grandia, R., Knoop, E., Gross, M. and Bächer, M., 2024, December. Vmp: Versatile motion priors for robustly tracking motion on physical characters. In Computer Graphics Forum (Vol. 43, No. 8, p. e15175).

Fu, Z., Zhao, Q., Wu, Q., Wetzstein, G. and Finn, C., 2024. Humanplus: Humanoid shadowing and imitation from humans. arXiv preprint arXiv:2406.10454.

He, T., Luo, Z., Xiao, W., Zhang, C., Kitani, K., Liu, C. and Shi, G., 2024, October. Learning human-to-humanoid real-time whole-body teleoperation. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8944-8951). IEEE.

He, T., Luo, Z., He, X., Xiao, W., Zhang, C., Zhang, W., Kitani, K., Liu, C. and Shi, G., 2024. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858.

Goal

Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch


Published since: 2025-02-25

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao

Topics Information, Computing and Communication Sciences

Acoustic Standing Waves for Particle Manipulation in Air

Acoustic Robotics for Life Sciences and Healthcare (ARSL)

Acoustic standing waves can be used to manipulate physical objects in both gas and liquid environments. This project investigates their effects on particle flow and selectivity in air, considering various particle sizes and weights. Through modeling, simulations, and experimental validation, we aim to characterize the selectivity of these waves and develop a compact driver circuit for practical implementation. The student will work closely with Honeywell engineers on test setups, electronic designs, and prototyping. A successful outcome may lead to a subsequent R&D phase or PhD project in collaboration with Honeywell to further develop these findings.

Keywords

Acoustics, Standing Waves, Particle Manipulation, Flow Control, Electronics

Labels

Master Thesis

Description

Acoustic standing waves have been widely used to control and manipulate particles in liquid environments, but their potential in gas and air-based systems remains underexplored. By tuning acoustic parameters, it is possible to create virtual tweezers and particle streams, enabling selective manipulation.

This project aims to:

Investigate the effect of acoustic standing waves on particle transport in air.

Characterize particle selectivity based on size and weight.

Develop a compact driver circuit for airborne acoustic manipulation.

The student will engage in both theoretical modeling and experimental validation, collaborating with Honeywell engineers on electronic circuit design, hardware prototyping, and test setups. A successful outcome could lead to an extended R&D phase or a potential PhD project to further develop and implement these findings.
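For background on the size and weight selectivity the project aims to characterize: the time-averaged force on a particle much smaller than the acoustic wavelength is commonly modeled with the Gor'kov radiation potential. This is standard acoustofluidics theory, added here for context rather than taken from the project text:

```latex
\[
F_{\mathrm{rad}} = -\nabla U,
\qquad
U = 2\pi R^{3}\left[\frac{f_{1}}{3\rho_{0}c_{0}^{2}}\,\langle p^{2}\rangle
  - \frac{f_{2}\,\rho_{0}}{2}\,\langle v^{2}\rangle\right],
\]
\[
f_{1} = 1 - \frac{\kappa_{p}}{\kappa_{0}},
\qquad
f_{2} = \frac{2\,(\rho_{p} - \rho_{0})}{2\rho_{p} + \rho_{0}},
\]
```

where R is the particle radius, κ and ρ denote compressibilities and densities (subscript p for the particle, 0 for the medium), and ⟨p²⟩, ⟨v²⟩ are the mean-square acoustic pressure and velocity. The cubic dependence on R and the density contrast in f₂ are what make selectivity by particle size and weight possible in a standing wave.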

Goal

The project includes the following tasks:

Developing simulation models to analyze the effect of acoustic standing waves on particle transport in air.

Characterizing particle selectivity for different sizes, weights, and flow parameters.

Building a proof-of-concept (POC) demonstrator showcasing the ability to manipulate at least one type of particle.

Estimating the driving-circuit power envelope and cost for practical implementation.

Contact Details

Please send your CV and transcript of records to: Mahmoud Medany – mmedany@ethz.ch and Prof. Daniel Ahmed - dahmed@ethz.ch


Published since: 2025-02-25, Earliest start: 2025-02-25, Latest end: 2025-09-30

Organization Acoustic Robotics for Life Sciences and Healthcare (ARSL)

Hosts Medany Mahmoud

Topics Engineering and Technology, Physics

Loosely Guided Reinforcement Learning for Humanoid Parkour

ETH Competence Center - ETH AI Center

Humanoid robots hold the promise of navigating complex, human-centric environments with agility and adaptability. However, training these robots to perform dynamic behaviors such as parkour (jumping, climbing, and traversing obstacles) remains a significant challenge due to the high-dimensional state and action spaces involved. Traditional Reinforcement Learning (RL) struggles in such settings, primarily due to sparse rewards and the extensive exploration needed for complex tasks.

This project proposes a novel approach to address these challenges by incorporating loosely guided references into the RL process. Instead of relying solely on task-specific rewards or complex reward shaping, we introduce a simplified reference trajectory that serves as a guide during training. This trajectory, often limited to the robot's base movement, reduces the exploration burden without constraining the policy to strict tracking, allowing the emergence of diverse and adaptable behaviors.

RL has demonstrated remarkable success in training agents for tasks ranging from game playing to robotic manipulation. However, its application to high-dimensional, dynamic tasks like humanoid parkour is hindered by two primary challenges:

Exploration complexity: the vast state-action space of humanoids leads to slow convergence, often requiring millions of training steps.

Reward design: sparse rewards make it difficult for the agent to discover meaningful behaviors, while dense rewards demand intricate and often brittle design efforts.

By introducing a loosely guided reference, a simple trajectory representing the desired flow of the task, we aim to reduce the exploration space while maintaining the flexibility of RL. This approach bridges the gap between pure RL and demonstration-based methods, enabling the learning of complex maneuvers like climbing, jumping, and dynamic obstacle traversal without heavy reliance on reward engineering or exact demonstrations.
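One minimal way to read "loose guidance" is a reward that adds a weakly weighted base-trajectory term to a sparse task bonus, so the reference steers exploration without dictating the whole-body motion. The sketch below is a hypothetical illustration; the function name, the weight `w_guide`, and the length scale `sigma` are assumptions, not the project's actual design.

```python
import numpy as np

def loosely_guided_reward(base_pos, ref_base_pos, task_done,
                          sigma=1.0, w_guide=0.3):
    """Mix a sparse task bonus with a *loose* guidance term that only
    tracks the robot base, not the full-body pose. A small w_guide
    keeps the reference from over-constraining emergent behavior."""
    err = np.sum((np.asarray(base_pos) - np.asarray(ref_base_pos)) ** 2)
    guide = float(np.exp(-err / sigma**2))
    task = 1.0 if task_done else 0.0
    return task + w_guide * guide

# Early in training the guidance term dominates and pulls the base
# toward the reference; once the task succeeds, the sparse bonus does.
r = loosely_guided_reward([0.0, 0.0], [0.0, 0.0], task_done=True)
```

Comparing this guided reward against a purely sparse baseline is one concrete way to run the "Evaluate Exploration Efficiency" work package below.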

Keywords

humanoid, reinforcement learning, loosely guided

Labels

Master Thesis

Description

Work packages

Design a Loosely Guided RL Framework that integrates simple reference trajectories into the training loop.

Evaluate Exploration Efficiency by comparing baseline RL methods with the guided approach.

Demonstrate Complex Parkour Behaviors such as climbing, jumping, and dynamic traversal using the guided RL policy.

Hardware validation encouraged

Requirements

Strong programming skills in Python

Experience in reinforcement learning and imitation learning frameworks

Publication

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to robotics or machine learning conferences that highlight outstanding robotic performance.

Related literature

Peng, Xue Bin, et al. "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions On Graphics (TOG) 37.4 (2018): 1-14.

Li, C., Vlastelica, M., Blaes, S., Frey, J., Grimminger, F. and Martius, G., 2023, March. Learning agile skills via adversarial imitation of rough partial demonstrations. In Conference on Robot Learning (pp. 342-352). PMLR.

Li, Chenhao, et al. "FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning."

Serifi, A., Grandia, R., Knoop, E., Gross, M. and Bächer, M., 2024, December. Vmp: Versatile motion priors for robustly tracking motion on physical characters. In Computer Graphics Forum (Vol. 43, No. 8, p. e15175).

Fu, Z., Zhao, Q., Wu, Q., Wetzstein, G. and Finn, C., 2024. Humanplus: Humanoid shadowing and imitation from humans. arXiv preprint arXiv:2406.10454.

He, T., Luo, Z., Xiao, W., Zhang, C., Kitani, K., Liu, C. and Shi, G., 2024, October. Learning human-to-humanoid real-time whole-body teleoperation. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8944-8951). IEEE.

He, T., Luo, Z., He, X., Xiao, W., Zhang, C., Zhang, W., Kitani, K., Liu, C. and Shi, G., 2024. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858.

Goal

Contact Details

Please include your CV and transcript in the submission.

Chenhao Li

https://breadli428.github.io/

chenhli@ethz.ch


Published since: 2025-02-25

Organization ETH Competence Center - ETH AI Center

Hosts Li Chenhao

Topics Information, Computing and Communication Sciences

Piezoelectric Atomization: Optimizing Liquid and Gel Dispersion

Acoustic Robotics for Life Sciences and Healthcare (ARSL)

Piezoelectric elements are widely used for particle manipulation and atomization, with applications in humidification, cooling, and medical aerosol generation. However, temperature and environmental factors can impact the efficiency of vaporization and the properties of the generated droplets. Additionally, the heat generated by piezo elements affects particle size and flux, requiring careful control. This project will investigate the effect of piezoelectric elements on liquid and gel atomization, optimizing power consumption, repeatability, and calibration. A proof-of-concept demonstrator will be developed to study these parameters under controlled conditions. A successful outcome may lead to a subsequent R&D phase or PhD project in collaboration with Honeywell to further develop these findings.

Keywords

Piezoelectric Atomization, Liquid/Gel Dispersion, Energy Efficiency, Particle Size Control, Low-Power Electronics

Labels

Master Thesis

Description

Piezo elements have been used for various particle manipulation applications, from humidifiers to large-scale cooling solutions. However, low temperatures and environmental factors can complicate the vaporization process, affecting performance and efficiency. Additionally, piezo elements generate heat, which influences the size and distribution of atomized particles.

This project aims to:

Investigate the effect of piezoelectric elements on liquid and gel atomization under different conditions (viscosity, temperature, environmental factors).

Develop a proof-of-concept (POC) demonstrator to validate the feasibility of controlled atomization.

Optimize the driving circuit with a focus on low power consumption and efficiency.

Evaluate system parameters, ensuring repeatability and calibration of particle size and flux.

The project will combine theoretical modeling, circuit design, and experimental validation, providing new insights into piezo-based atomization technologies.
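As a starting point for the droplet-size modeling mentioned above, a widely used first-order estimate for ultrasonic atomization is Lang's relation, which ties the median droplet diameter to the capillary wavelength at the driving frequency. The water properties and the 2.4 MHz drive frequency below are illustrative assumptions, not project parameters:

```python
import math

def lang_droplet_diameter(surface_tension, density, freq_hz):
    """Lang's relation: median droplet diameter is roughly 0.34 times
    the capillary wavelength, d = 0.34 * (8*pi*sigma / (rho*f^2))**(1/3)."""
    return 0.34 * (8 * math.pi * surface_tension / (density * freq_hz**2)) ** (1 / 3)

# Water at room temperature (sigma ~ 0.0728 N/m, rho ~ 998 kg/m^3)
# driven at 2.4 MHz, a frequency typical of ultrasonic nebulizers:
d = lang_droplet_diameter(surface_tension=0.0728, density=998.0, freq_hz=2.4e6)
# d comes out on the order of a few micrometers.
```

This relation ignores viscosity and heating effects, which is exactly where the project's characterization of gels and temperature dependence goes beyond the first-order model.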

The student will engage in both theoretical modeling and experimental validation, collaborating with Honeywell engineers on electronic circuit design, hardware prototyping, and test setups. A successful outcome could lead to an extended R&D phase or a potential PhD project to further develop and implement these findings.

Goal

Deploying a proof-of-concept (TRL3) demonstrator to showcase the physical principles of piezo-driven atomization.

Optimizing the driving circuit to minimize power consumption while maintaining effective atomization.

Evaluating the atomization process, focusing on repeatability, particle size control, and flux calibration.

Contact Details

Please send your CV and transcript of records to: Mahmoud Medany – mmedany@ethz.ch and Prof. Daniel Ahmed - dahmed@ethz.ch


Published since: 2025-02-25, Earliest start: 2025-02-25, Latest end: 2025-09-30

Organization Acoustic Robotics for Life Sciences and Healthcare (ARSL)

Hosts Medany Mahmoud

Topics Engineering and Technology, Physics