
MLOps Engineer

Station F, Paris
Full Time
Open

About the job

Neuralk-AI is looking for an experienced MLOps Engineer to deploy our AI embedding models on our platform and work hand in hand with our science team to accelerate research.

You will report to the CSO of Neuralk and be based in our Paris office.

About Neuralk

We are a passionate team leading the way in AI innovation, committed to driving the rapid adoption of transformative AI applications. Our focus is on developing the technical tools that allow any company to build AI applications that natively interact with their structured databases (tabular or graph databases). Specifically, we develop a modern AI embedding platform that converts any structured database into a vector store, which can then be combined with classic machine learning models for classification, regression, or clustering.

As an early-stage AI-driven startup backed by significant funding (>3m), we base our approach on state-of-the-art academic research to drive practical business solutions. We value clear communication and simplicity in our approaches, promoting a constant optimization mindset.

Join Neuralk to be part of a growing team, eager to learn and adapt, united by the belief that our technology can make a significant positive impact and contribute to transforming the AI industry.

Co-founders: Alexandre Pasquiou (CSO) & Antoine Moissenot (CEO).

Neuralk is dedicated to equal opportunity employment and fosters an environment that is open and respectful of diversity. All applicants are encouraged to apply, even if you don’t meet every requirement. If you are passionate about our mission, learn quickly, and believe you can contribute, we want to hear from you.

Mission Highlights

As an MLOps Engineer, your mission will be to bridge the gap between machine learning research and production systems, ensuring seamless integration, deployment, and management of AI models on a large scale. You will collaborate closely with our research, data, and engineering teams (~5 people) to ensure the scalability, performance, and reliability of our AI-driven solutions, focusing on automation, model lifecycle management, and continuous delivery.

Role & Responsibilities

In this role, you will drive the operationalization of our machine learning models, directly supporting the company’s mission to make AI accessible and scalable. You will be responsible for:

  • Model Deployment & Automation: Design, develop, and optimize continuous integration/continuous deployment (CI/CD) pipelines for deploying machine learning models. Automate model training, testing, deployment, and monitoring to ensure efficiency and reliability.
  • Infrastructure Management: Build and maintain scalable infrastructure for machine learning workflows, leveraging cloud environments, container orchestration (Docker, Kubernetes), and monitoring tools.
  • Model Versioning and Lifecycle Management: Oversee the lifecycle of machine learning models, including versioning, governance, and deprecation strategies, ensuring proper integration with the platform and other tools.
  • Monitoring & Debugging: Implement robust monitoring systems to track the performance of live models, proactively identifying issues, and developing tools for model debugging and maintenance.
  • Reproducibility and Traceability: Establish and manage practices that ensure the reproducibility of models, experiments, and pipelines. Implement version control and logging systems for model and data changes.
  • Collaboration: Work closely with data scientists, machine learning engineers, and other stakeholders to design, implement, and manage production-grade model pipelines that enhance the company’s research and engineering capabilities.
  • Platform Optimization: Suggest improvements to the infrastructure to handle large-scale AI model deployments, ensuring scalability, performance, and cost-efficiency.

Profile

  • M.S. or B.S. in Computer Science, Software Engineering, or a closely related field with a focus on DevOps, MLOps, or Machine Learning.
  • 3+ years of experience in machine learning operations, DevOps, or cloud-based infrastructure roles, focusing on deploying, monitoring, and scaling machine learning systems.
  • Experience with cloud platforms (AWS, GCP, Azure) for ML model deployment and management.
  • Strong experience with CI/CD pipelines and version control systems like Git.
  • Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
  • Excellent communication skills in English and a proven ability to work in interdisciplinary teams.
  • Thrives in a fast-paced, evolving startup environment.
  • Self-starter and autonomous, with a focus on operational efficiency and scalability.

Bonuses

  • Experience in building and maintaining large-scale ML infrastructure.
  • Experience with MLOps tools, such as MLflow, Kubeflow, or TFX.
  • Familiarity with observability tools for monitoring machine learning pipelines (Prometheus, Grafana).
  • Proven experience in translating research into production at scale.
  • Strong programming skills in Python and familiarity with ML frameworks (e.g., PyTorch, TensorFlow).

Expertise

  • MLOps: In-depth understanding of MLOps principles, model lifecycle management, automation, and infrastructure scaling.
  • Cloud Infrastructure: Experience in managing cloud-based environments and optimizing resources for scalable ML deployments.
  • CI/CD & Automation: Expertise in setting up and maintaining CI/CD pipelines, automating model training, testing, and deployment.
  • Monitoring & Observability: Familiarity with tools and techniques to track model performance in production, including alerting and debugging.
  • Containerization: Proficiency in Docker and Kubernetes for managing and scaling ML models.
  • Programming: Proficient in Python and other relevant programming languages, with experience in developing and managing infrastructure as code (e.g., Terraform).

Interested in the role?

Get in touch and we will get back to you shortly.

Recruitment process

Compensation & Benefits

We are a fast-paced startup, yet we favor a good work-life balance and attractive compensation. We offer:

  • A competitive salary
  • Equity (BSPCE), to reflect the value you bring to Neuralk and to foster a shared journey
  • Comprehensive health insurance
  • French-standard paid leave and time off
  • A dynamic work setting. We prefer in-person collaboration, but are flexible about occasional remote work arrangements.
  • and more to come as we grow