42dot is seeking an AI Model Optimization and Tool Development Engineer (NPU) to focus on optimizing the autonomous driving stack and on-device large language models (LLMs). This role involves developing AI model optimization techniques for NPUs and building toolchains to ensure efficient execution. The engineer will be responsible for optimizing deep learning models for hardware accelerators, designing and developing toolchains that enhance performance, and supporting the advancement of AI technologies such as autonomous driving and LLMs through hardware-aware optimizations. This position plays a crucial role in bridging AI models with hardware accelerators, ensuring seamless integration and optimal efficiency.
Responsibilities
AI Model Porting and Optimization
- Port AI models for LLM and autonomous driving stacks to NPU hardware and optimize their performance.
- Improve inference speed by applying techniques such as model compression (quantization, pruning, etc.), operator fusion, and memory optimization.
Toolchain Development
- Design and implement toolchains for porting AI models to NPUs.
- Integrate with deep learning frameworks such as TensorFlow and PyTorch to provide an efficient workflow.
- Develop tools for NPU-specific code generation, profiling, and debugging.
Optimization of Autonomous Driving and LLM Stacks
- Optimize AI modules required for autonomous driving (e.g., object detection, path planning) to ensure compatibility and real-time execution performance.
- Enhance memory efficiency and speed through LLM inference optimization.
- Apply model parallelization and distributed execution techniques in multimodal AI stacks.
Performance Analysis and Improvement
- Analyze AI model runtime performance and identify bottlenecks.
- Implement techniques to maximize hardware utilization.
Research and Adoption of New Technologies
- Study the latest advancements in AI model optimization and NPU-related technologies.
- Experiment with and adopt new techniques to maximize NPU performance.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI, or a related field
- At least 3 years of experience in AI model optimization and hardware acceleration
- Experience optimizing AI models using NPUs, GPUs, or ASICs
- Proficiency in deep learning frameworks and model conversion tools such as TensorFlow Lite, ONNX, and PyTorch
- Expertise in model compression and optimization techniques, including quantization, pruning, and lazy evaluation
- Proficiency in programming languages such as CUDA, C++, and Python, with experience in writing hardware-accelerated code
- Strong understanding of memory management and parallel computing techniques
Preferred Qualifications
- Experience with autonomous driving stacks, including SLAM, path planning, and object recognition
- Optimization experience for on-device AI/LLM applications
- Familiarity with compiler technologies such as LLVM and MLIR
- Experience in AI optimization for embedded systems
- Contributions to open-source AI optimization projects
Interview Process
Application Review → Coding Test → First Interview (~1 hour) → Second Interview (~3 hours) → Final Selection
- The interview process may vary depending on the position and is subject to change based on the schedule and circumstances.
- Applicants will be individually notified of the interview schedule and results via the email provided in their application.