Member of Technical Staff - Training Infrastructure Engineer

Liquid AI, an MIT spin-off, is a foundation model company headquartered in Boston, Massachusetts. Our mission is to build capable and efficient general-purpose AI systems at every scale.

Our goal at Liquid is to build the most capable AI systems to solve problems at every scale, so that users can build, access, and control their own AI solutions, and so that AI is integrated meaningfully, reliably, and efficiently across enterprises. Long term, Liquid will create and deploy frontier-AI-powered solutions that are available to everyone.

What This Role Is
We're looking for a Training Infrastructure Engineer to design, build, and optimize the distributed systems that power our Liquid Foundation Models (LFMs). This is a highly technical role focused on creating the scalable infrastructure that enables efficient training of models across the spectrum, from compact specialized models to massive multimodal systems, while maximizing hardware utilization and minimizing training time.

You're A Great Fit If

  • You have extensive experience building distributed training infrastructure for language and multimodal models, with hands-on expertise in frameworks like PyTorch Distributed, DeepSpeed, or Megatron-LM (a minimal training-loop sketch follows this list)
  • You're passionate about solving complex systems challenges in large-scale model training—from efficient multimodal data loading to sophisticated sharding strategies to robust checkpointing mechanisms
  • You have a deep understanding of hardware accelerators and networking topologies, with the ability to optimize communication patterns for different parallelism strategies
  • You're skilled at identifying and resolving performance bottlenecks in training pipelines, whether they occur in data loading, computation, or communication between nodes
  • You have experience working with diverse data types (text, images, video, audio) and can build data pipelines that handle heterogeneous inputs efficiently
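
To make the framework expectations concrete, here is a minimal sketch of a multi-GPU training loop built on PyTorch DistributedDataParallel. The model and the synthetic batch are placeholders for illustration, not part of the role or any actual codebase.

    # Minimal DDP training skeleton. Launch with:
    #   torchrun --nproc_per_node=8 train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for a real model
        model = DDP(model, device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(100):
            x = torch.randn(8, 4096, device="cuda")  # stand-in for a data loader
            loss = model(x).pow(2).mean()
            loss.backward()  # DDP all-reduces gradients across ranks here
            opt.step()
            opt.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Real training jobs layer sharding, mixed precision, and checkpointing on top of this skeleton, but the process-group setup and gradient-synchronization pattern stay the same.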

What Sets You Apart

  • You've implemented custom sharding techniques (tensor/pipeline/data parallelism) to scale training across distributed GPU clusters of varying sizes (a toy tensor-parallel layer is sketched after this list)
  • You have experience optimizing data pipelines for multimodal datasets with sophisticated preprocessing requirements
  • You've built fault-tolerant checkpointing systems that can handle complex model states while minimizing training interruptions
  • You've contributed to open-source training infrastructure projects or frameworks
  • You've designed training infrastructure that works efficiently for both parameter-efficient specialized models and massive multimodal systems
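
As one illustration of what "custom sharding" means here, the toy layer below splits a linear layer's output dimension across ranks in the style of tensor parallelism. It is a simplified sketch of the idea, not Megatron-LM's actual implementation.

    # Toy column-parallel linear layer: each rank owns a slice of the output
    # columns, and an all-gather reassembles the full activation.
    import torch
    import torch.distributed as dist

    class ColumnParallelLinear(torch.nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.world_size = dist.get_world_size()
            assert out_features % self.world_size == 0
            self.weight = torch.nn.Parameter(
                torch.randn(out_features // self.world_size, in_features) * 0.02
            )

        def forward(self, x):
            # Each rank computes only its slice: (batch, out_features / world_size).
            local_out = x @ self.weight.t()
            # Gather every rank's slice and concatenate along the feature dim.
            # Note: dist.all_gather does not carry gradients; a real layer would
            # use an autograd-aware gather such as
            # torch.distributed.nn.functional.all_gather.
            gathered = [torch.empty_like(local_out) for _ in range(self.world_size)]
            dist.all_gather(gathered, local_out)
            return torch.cat(gathered, dim=-1)

Production implementations also shard the matching input dimension of the following layer, so the gather can often be deferred or replaced by a single all-reduce, which is where most of the communication savings come from.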

What You'll Actually Do

  • Design and implement high-performance, scalable training infrastructure that efficiently utilizes our GPU clusters for both specialized and large-scale multimodal models
  • Build robust data loading systems that eliminate I/O bottlenecks and enable training on diverse multimodal datasets
  • Develop sophisticated checkpointing mechanisms that balance memory constraints with recovery needs across different model scales (a minimal crash-safe save step is sketched after this list)
  • Optimize communication patterns between nodes to minimize the overhead of distributed training for long-running experiments
  • Collaborate with ML engineers to implement new model architectures and training algorithms at scale
  • Create monitoring and debugging tools to ensure training stability and resource efficiency across our infrastructure
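
To give a flavor of the checkpointing work, here is a minimal sketch of a crash-safe save step for fully replicated (DDP-style) state. The paths, filenames, and rank-0 strategy are illustrative assumptions.

    # Write to a temp file, then atomically rename, so a crash mid-write can
    # never corrupt the most recent checkpoint.
    import os
    import torch
    import torch.distributed as dist

    def save_checkpoint(model, optimizer, step, ckpt_dir="checkpoints"):
        # With DDP every rank holds identical state, so rank 0 can save alone.
        if dist.get_rank() != 0:
            return
        os.makedirs(ckpt_dir, exist_ok=True)
        state = {
            "step": step,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
        }
        tmp_path = os.path.join(ckpt_dir, f"step_{step}.pt.tmp")
        torch.save(state, tmp_path)
        # os.replace is atomic on POSIX filesystems.
        os.replace(tmp_path, os.path.join(ckpt_dir, f"step_{step}.pt"))

For sharded training (FSDP, tensor, or pipeline parallelism) each rank holds only part of the state, so real systems write per-rank shards, for example via torch.distributed.checkpoint, rather than funneling everything through rank 0.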

What You'll Gain

  • The opportunity to solve some of the hardest systems challenges in AI, working at the intersection of distributed systems and cutting-edge multimodal machine learning
  • Experience building infrastructure that powers the next generation of foundation models across the full spectrum of model scales
  • The satisfaction of seeing your work directly enable breakthroughs in model capabilities and performance