Building intelligent systems from edge to cloud
Most AI agent demos look impressive and break the moment you connect them to a real CRM. I build the ones that don't.
I architect multi-agent systems that handle end-to-end workflows: lead qualification, CRM updates, message drafting with human approval, natural language analytics. The hard part isn't getting one LLM call to work. It's orchestrating multiple steps, managing context across them, and knowing where a human needs to step in. I've built sales copilots, conversational booking systems, and RAG-powered chatbots for clients across France, the Middle East, and the US, deployed on AWS with isolated databases per tenant.
I also work with Azure AI Search and on-premises LLM deployments for companies that can't send their data to a third-party API. Every system ships with human oversight at the decision points that actually matter.
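The human-approval gate mentioned above can be sketched in a few lines. This is a minimal illustration, not a client system; the `draft_message` stub and the `review` callback are hypothetical stand-ins for an LLM drafting step and an approval UI.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    recipient: str
    body: str

def draft_message(lead_name: str) -> Draft:
    # Stand-in for an LLM drafting step; stubbed for illustration.
    return Draft(recipient=lead_name,
                 body=f"Hi {lead_name}, following up on your demo request.")

def approval_gate(draft: Draft,
                  review: Callable[[Draft], bool]) -> Optional[Draft]:
    # The workflow pauses here: nothing goes out unless a human
    # (or a policy standing in for one) explicitly approves the draft.
    return draft if review(draft) else None

# In production, `review` surfaces the draft to a person;
# here a simple length check stands in for the human decision.
approved = approval_gate(draft_message("Ana"), review=lambda d: len(d.body) < 500)
```

The point of the gate is structural: the send action lives behind the approval check, so no prompt failure can skip it.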
I work with companies that need ML models in production, not sitting in a Jupyter notebook getting demoed to investors.
My deployments include content moderation pipelines, ad optimization engines, and anomaly detection systems. The stack depends on the problem. I work across AWS, Azure, and GCP, and I pick the services that fit the use case, not the other way around. I handle the full MLOps cycle: data pipelines, training, deployment, monitoring for drift, and the part nobody talks about, keeping it all running after the initial launch.
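One piece of that monitoring step, sketched under simplifying assumptions: a Population Stability Index (PSI) check that flags when a feature's serving distribution has drifted away from training. The 0.1/0.25 thresholds are common rules of thumb, not universal constants, and the data here is simulated.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Population Stability Index over quantile bins of the training data.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e = np.clip(e, 1e-6, None)                     # avoid log(0)
    a = np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 10_000)      # feature at training time
stable = rng.normal(0.0, 1.0, 10_000)     # serving data, no drift
drifted = rng.normal(0.7, 1.0, 10_000)    # serving data, mean has shifted
```

By the usual convention, PSI below 0.1 means no meaningful drift and above 0.25 means the model needs attention; `stable` lands well under the first threshold and `drifted` well over the second.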
I hold AWS and Azure certifications, and I speak regularly at AWS community events.
Edge deployment is where things get tricky because you can't just throw more GPU at the problem.
I handle the full edge CV pipeline: training, fine-tuning, pruning, quantization, and building inference applications that run on constrained hardware in real time. Past applications include automotive safety systems (collision warning, lane detection, traffic sign recognition), agricultural monitoring from satellite and drone imagery, logo detection for brand compliance, and multilingual licence plate recognition.
The interesting part of edge work is fitting an accurate model inside hardware that costs a few hundred dollars, not a few thousand.
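As a toy illustration of the quantization step: symmetric per-tensor int8 quantization, which trades bounded precision loss for a 4x smaller weight tensor. Real deployments go through a toolchain like TensorRT or TFLite rather than hand-rolled code; this sketch just shows the arithmetic.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: map max |w| to the int8 limit 127.
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 0.004], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Worst-case rounding error is half a quantization step (scale / 2),
# which is why outlier weights blow up per-tensor schemes in practice.
```

Per-channel scales and quantization-aware training are the usual next steps when per-tensor error costs too much accuracy on the target device.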
I also take on work where the job isn't writing code. Sometimes a company needs someone to figure out where AI fits in their operations, whether to build or buy, and how to phase a rollout without burning money.
I studied Data Science and AI Strategy at emlyon business school and McGill University, so I can hold both conversations: the technical architecture and the business case. I've evaluated startups as a jury member at MWC Barcelona and the Startup World Series. I've led AI workshops across universities and tech events reaching hundreds of attendees. And I've delivered hands-on implementations for clients ranging from early-stage startups to banks and enterprise.
Predictive models tell you what will probably happen. Causal inference tells you what would happen if you changed something. Different question, harder question, and the one that matters when you're making process decisions in manufacturing or operations.
I build causal analysis pipelines that isolate the real drivers behind operational outcomes. The method depends on the data and the question, but the goal is always the same: give decision-makers a clear picture of what actually moves the needle, backed by evidence they can defend in a room full of engineers.
When someone asks 'what happens if we change this process variable?', causal inference gives a grounded answer instead of a forecast with a confidence interval.
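A simulated illustration of the difference, with toy numbers rather than client data: machine age confounds a process change, so the naive before/after comparison gets the sign of the effect wrong, while adjusting for the confounder recovers the true effect.

```python
import random
random.seed(0)

# Toy manufacturing data: old machines are both more likely to get the
# new process (treatment) and more likely to produce defects (outcome).
rows = []
for _ in range(20_000):
    old = random.random() < 0.5                      # confounder
    t = random.random() < (0.8 if old else 0.2)      # treatment
    # True causal effect of the new process: -0.10 defect probability.
    y = random.random() < 0.15 + 0.30 * old - 0.10 * t
    rows.append((old, t, y))

def defect_rate(rs):
    return sum(y for _, _, y in rs) / len(rs)

# Naive comparison: confounded, makes the new process look harmful.
naive = (defect_rate([r for r in rows if r[1]])
         - defect_rate([r for r in rows if not r[1]]))

# Backdoor adjustment: compare within machine-age strata, then average
# the per-stratum effects weighted by stratum size.
adjusted = 0.0
for stratum in (True, False):
    s = [r for r in rows if r[0] == stratum]
    effect = (defect_rate([r for r in s if r[1]])
              - defect_rate([r for r in s if not r[1]]))
    adjusted += effect * len(s) / len(rows)
```

The naive estimate comes out positive (the new process appears to raise defects) purely because old machines get treated more often; the stratified estimate lands near the true -0.10. Real pipelines need the same move with messier graphs, which is where the method choice mentioned above comes in.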