Robotoddler

Autonomous Robotic Assemblies

Started: June 1, 2023
Status: In Progress

Abstract

Industrial robots are widely used for repetitive automation tasks across many industries. In architecture, however, the ever-changing form, environment, and materials of each construction hinder their wide application, and existing use cases are mostly limited to linear design-plan-fabricate processes. Robotoddler investigates the use of reinforcement learning (RL) for the multi-robotic assembly of building elements into architectural structures. In a virtual simulation environment, one or more robots learn to pick, move, place, or hold different elements (building blocks) through trial-and-error and positive or negative reinforcement. This involves learning not only the robotic movements but also the design decisions, as the location, orientation, and sequence of placement are part of the RL action space.
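As a rough illustration of this formulation, the sketch below defines a toy block-placement environment in a Gym-like style. The grid representation, the action encoding as a target cell, and the "supported from below" stability rule are simplifying assumptions made here for illustration; they are not the project's actual simulation or reward design.

```python
import numpy as np

class BlockAssemblyEnv:
    """Toy sketch of a block-assembly RL environment (illustrative only).

    State:  occupancy grid over a 2D cross-section of the structure.
    Action: (x, y) target cell for the next block -- the agent thereby
            chooses location and, across steps, the placement sequence.
    Reward: +1 for a supported placement; -1 (episode ends) for an
            occupied or unsupported cell, a stand-in for stability checks.
    """

    def __init__(self, width=5, height=5):
        self.width, self.height = width, height
        self.reset()

    def reset(self):
        self.grid = np.zeros((self.height, self.width), dtype=int)
        return self.grid.copy()

    def _supported(self, x, y):
        # Ground level is always supported; otherwise need a block below.
        return y == 0 or self.grid[y - 1, x] == 1

    def step(self, action):
        x, y = action
        if self.grid[y, x] == 1 or not self._supported(x, y):
            return self.grid.copy(), -1.0, True, {}   # invalid / unstable
        self.grid[y, x] = 1
        done = bool(self.grid.all())                  # structure complete
        return self.grid.copy(), 1.0, done, {}
```

In this sketch the orientation component of the action is omitted for brevity; a fuller version would extend the action tuple with an orientation index and a robot identifier for the multi-robot case.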

Once trained, the policy learned by the RL model in simulation will be transferred to the real robotic setup at the CRCL. A series of experiments of increasing complexity will be conducted, progressing from one to two or more robots, and from simple stacking to structures spanning between supports.

People

Collaborators

SDSC Team:

  • Paul Rolland
  • Johannes Kirschner

PI | Partners:

EPFL, Lab for Creative Computation:

  • Prof. Stefana Parascho

Description

Motivation

Construction and architectural design rely extensively on human expertise throughout planning and execution. Developing assistive, learning-based methods is challenging due to the rigid, multi-stage nature of the design process. End-to-end trained policies hold the promise to capture the entirety of the design and construction process, offering adaptability to unforeseen changes and discovering innovative designs. The main challenge is that the space of possible structures is enormous, and developing learning algorithms to navigate this space efficiently requires state-of-the-art reinforcement and planning methods and possibly novel solutions.

Proposed Approach / Solution

The SDSC partners are training reinforcement learning agents for construction tasks in a simulated environment. We are currently investigating several distinct modelling approaches with the goal of mastering increasingly difficult construction tasks. In a second step, we aim to deploy the trained policies on the robotic setup at the Lab for Creative Computation, EPFL, in collaboration with the project partners, who already have facilities for collaborative robotic construction (see Figure 1).
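The project relies on state-of-the-art RL and planning methods; purely to illustrate the underlying trial-and-error training loop, the sketch below runs tabular Q-learning on a hypothetical two-column task, where an agent must stack both supports to equal height so a beam could span between them. The task, reward values, and hyperparameters are assumptions made for this example.

```python
import random
from collections import defaultdict

# Hypothetical toy task: raise two columns to a target height so a beam
# can span between them. Overshooting either column fails the episode.
TARGET = 3

def step(state, action):
    # state = (left_height, right_height); action = 0 (left) or 1 (right)
    left, right = state
    if action == 0:
        left += 1
    else:
        right += 1
    if left > TARGET or right > TARGET:
        return (left, right), -1.0, True          # overshoot: failure
    done = left == TARGET and right == TARGET
    return (left, right), (1.0 if done else 0.0), done

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Standard epsilon-greedy tabular Q-learning."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            if random.random() < eps:
                action = random.randint(0, 1)     # explore
            else:
                action = max((0, 1), key=lambda a: Q[state, a])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(Q[nxt, a] for a in (0, 1))
            Q[state, action] += alpha * (reward + gamma * best_next
                                         - Q[state, action])
            state = nxt
    return Q
```

After training, greedily following `Q` from the empty state reaches the completed `(3, 3)` structure without overshooting either column. The actual project replaces this tabular scheme with methods that scale to the enormous space of possible structures.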

Impact

The outcomes of this project may lead to innovations in robotic construction that allow for novel construction solutions and designs. We aim to demonstrate the feasibility of learned policies for construction on the existing robotic platforms. We further expect to improve current reinforcement learning algorithms so that they can handle the challenging construction domain.

Figure 1: Robotic Assembly at the Lab for Creative Computation, EPFL (Photo Credit: Stefana Parascho).

