AI-Driven Digital Transformation Expert

Max Prasolov is a customer success-oriented data science executive specializing in guiding enterprises through AI-driven digital transformation processes. He brings vast experience in the identification of new market niches for cutting-edge AI technologies to the clients he serves.

Max has extensive and diversified digital transformation experience in Healthcare, Financial Services, Retail & eCommerce, Media, Telecommunication, Metallurgy, Mining. He supports this work with a solid background in Data Science, Machine Learning, Visual Data, and CGI Multimedia.

A C-Level visionary with a strong ability to orchestrate single effort and multifunction activities and support large enterprises through digital transformation challenges, Max is known as an ardent and motivational mentor for modernizing large-scale organizations and startups.

Functional Expertise: Digital Transformation Consulting, Data-Driven StoryTelling, AI and MLOps Solutions & Services.

Neu.ro brings MLOps and AI expertise to the Content Authenticity Initiative

Neu.ro is excited to join the amazing group of companies that form the Content Authenticity Initiative, an organization dedicated to the creation of a simple, extensible, and distributed media provenance solution. Announced by Adobe in 2019, CAI membership includes The New York Times, the BBC, ARM, Microsoft, Qualcomm, and a host of other companies focused on the distribution of accurate and verifiable digital content.

With AI-driven content generation becoming an increasingly important part of the media landscape, content creators and consumers are recognizing the positive and negative impacts that technology can have on content provenance. With this in mind, Neu.ro is pleased to bring its technical expertise in AI, Machine Learning, and Deep Learning to the CAI. 

“Our work is focused on establishing the tech stack, processes, and workflows that allow ML engineers to build AI as safely as software engineers build traditional technology solutions. This is called Machine Learning Operations (MLOps), and we believe it is foundational to responsible AI.” – Arthur McCallum, Director of Partnerships at Neu.ro

The Content Authenticity Initiative is a group of designers, engineers, researchers, journalists, and leaders who seek to address content authenticity at scale. We are focused on cross-industry participation, with an open, extensible approach for providing media transparency that allows for better evaluation of content provenance. This group collaborates with a wide set of representatives from software, publishing, and social media companies, human rights organizations, and academic researchers to develop content attribution standards and tools.


Neu.ro Team at NVIDIA GTC

Neu.ro is honored to be presenting the future of MLOps at NVIDIA GTC, April 12-16. NVIDIA has put together an amazing lineup of AI/ML leaders including Yoshua Bengio, Geoffrey Hinton, Yann LeCun and many others.

Neu.ro’s Director of Partnerships, Arthur McCallum, will present the future of MLOps in the Inception Track. His video presentation will be available for all attendees to view from April 12-16. Learn what MLOps can do for your AI strategy.

Many thanks to the NVIDIA Inception program for driving MLOps innovation.

GTC 2021 is a free conference – register today.


Designing a scalable face detection system in 2020

Introduction

Face detection is one of the core problems in Computer Vision, and it emerged long before neural networks became widely used. You might think it was solved a long time ago: GitHub is full of toy repositories and even almost-production-ready frameworks on the subject.

But reality turned out to be much more complex when we started solving the mask detection problem ourselves in early March 2020, prompted by COVID-19.

Why mask detection is important

The pandemic created an urgent need for social responsibility around wearing face masks. People not wearing masks in public places – be it malls, convenience stores, or even streets – put others at risk on a daily basis.

This is why a system that detects such cases would be of great help in times of quarantine. Such a system can also be integrated with various messengers and configured to send mask alert notifications.

For example, a store owner could avoid penalties by monitoring employees and raising public safety awareness among them: they could use a mask detection system in their store and receive a notification whenever a staff member is detected not wearing a mask.

The general solution

Mask detection is a downstream task of face detection. An intuitive way to solve it is to use an occlusion-aware face detection system, and then apply a classifier to its results that can tell whether a person is wearing a mask or not. 
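The two-stage idea can be sketched in a few lines. This is an illustrative stub, not the actual system: both models are faked out, and the function names and the (x, y, w, h) box format are our assumptions.

```python
def detect_faces(image):
    # Stand-in for an occlusion-aware face detector: returns one record
    # per face, each carrying its bounding box and (here) a ground-truth
    # flag that the stub classifier below reads.
    return image["faces"]

def classify_mask(face):
    # Stand-in for a mask/no-mask classifier applied to a face crop.
    return "mask" if face["masked"] else "no_mask"

def detect_masks(image):
    """Stage 1: detect faces. Stage 2: classify each detected face."""
    return [(face["box"], classify_mask(face)) for face in detect_faces(image)]
```

In the real pipeline each stage is a trained network, but the control flow stays exactly this simple: detection output feeds classification input.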

In this short article, we want to give a quick recap of the pitfalls we faced, the places where we failed while developing a mask detection system, and the small technical decisions that finally led us to success.

We will also benchmark the existing open-source and commercial face/face-mask detection systems in terms of accuracy and speed.

The problems and the state of other solutions

We want to start by describing the problems we faced. As you may guess, the main problems of many machine learning tasks also apply to mask detection. Here are the most crucial ones we encountered.

Lack of diverse detection datasets

First of all, there are no large datasets with diverse collections of faces from real-world security cameras. 

Of course, WIDER FACE is a good starting point with its 400,000 faces. However, it turns out that adding 20,000 new faces from custom security camera sets helps increase the average precision of face detection from 0.5 to 0.8 on a diverse set of real-world security cameras (not only on those included in the training set). Specifically, we included data with streaming artifacts, rotation, and other commonly found variations and distortions. This helped our model generalize better to real-life data.

It is common for custom datasets to provide data better suited to your specific needs than open-source ones, which generally contributes to a higher quality of the final solution.

Lack of diverse classification datasets

There is also a huge demand for classification datasets for faces with and without masks. 

It seems that people didn’t need such sets before COVID-19 started. Some datasets were created after the pandemic began, but their quality is mostly poor. 

MOXA3k, for example, is a dataset originally created for detection. Crops from detection datasets can often be reused for classification, but MOXA3k mostly provides studio-quality street images and indoor photos, so this didn’t work.

Also, a lot of datasets (almost every one found on Kaggle) mostly include beautiful, high-quality images of doctors with masked faces, some of them obviously staged. Of course, this is far from real-world data. The common approach of using synthetic data didn’t help us achieve good results on crops of security camera photos either, even after we tried some domain adaptation techniques.

The task of classifying whether a person is or is not wearing a mask can get even trickier in some edge cases. Here are some examples:

[Image] A person can be technically wearing a mask, but in a way that’s not really conducive to safety in public spaces.
[Image] People can also make some very creative decisions at times.

Subpar open-source solutions

The next issue is a generally low quality of available open-source solutions. 

Specifically, they don’t detect all people in the frame and don’t achieve good accuracy.

We wanted to create a solution that makes more accurate predictions and tends to find and classify more faces (and we’re happy to say we’ve succeeded). 

Example 1

[Image comparison: Face-Mask-Detection by chandrikadeb7 vs. face-mask-detection by Cindyalifia vs. our solution]

Example 2

[Image comparison: Face-Mask-Detection by chandrikadeb7 vs. face-mask-detection by Cindyalifia vs. our solution]

Low quality of real images

And finally, all the issues we already mentioned are further exacerbated by the often very low quality of images available from security cameras.

Here’s one extreme case of barely comprehensible real-life data:

[Image: an extremely low-quality security camera shot]
Yes, it is actually a face! 

Our solution

Our eventual goal was to develop a system that is scalable both by the number of cameras and GPU workers, so making streaming frame-wise predictions was not the best option. Therefore, our pipeline is based on analyzing only a few shots from each camera every X seconds.
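The sampling policy can be sketched as a small scheduling helper: grab a few shots from each camera at the start of every sampling window instead of scoring the stream frame by frame. All parameter values here are illustrative, not the production settings.

```python
def snapshot_frame_indices(fps, period_s, shots_per_window, duration_s):
    """Indices of the frames to grab over `duration_s` seconds of video."""
    indices = []
    t = 0.0
    while t < duration_s:
        first = int(t * fps)                      # first frame of the window
        indices.extend(range(first, first + shots_per_window))
        t += period_s                             # jump to the next window
    return indices
```

For example, a 10 FPS stream sampled with 2 shots every 5 seconds touches only 6 of the first 120 frames, which is what makes the pipeline cheap to scale across many cameras.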

The model

A two-stage approach is used for detecting people with and without masks. 

We chose the YOLOv4 architecture as implemented in the Darknet framework due to its high performance, both in terms of quality and inference time. It was trained on the WIDER FACE dataset and then fine-tuned on a custom dataset from a security camera. Faces cropped by the detector are used as input for a classification network. At this stage, we used EfficientNet-B0 trained on a mix of different datasets from the web.
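The hand-off between the two stages is a crop step. As a hedged sketch of what such a helper might look like (the (x, y, w, h) box format and the 10% padding are our assumptions, not values from the actual system), each detector box is padded slightly for context and clamped to the image bounds before being fed to the classifier:

```python
def padded_crop_box(box, image_w, image_h, pad=0.1):
    """Return (x0, y0, x1, y1) of a padded detector box clamped to the image."""
    x, y, w, h = box
    dx, dy = int(w * pad), int(h * pad)           # pad each side by `pad`
    x0, y0 = max(0, x - dx), max(0, y - dy)       # clamp to the top-left
    x1, y1 = min(image_w, x + w + dx), min(image_h, y + h + dy)
    return x0, y0, x1, y1
```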

While designing the pipeline, we aimed not for a high benchmark score but for good reproducibility on production data and low inference time with high-resolution input images. Our pipeline achieves 11 FPS per frame at a 1024×1024 image size without batch processing and 26 FPS with it.

We have also conducted some experiments using domain adaptation (Ganin & Lempitsky, Unsupervised Domain Adaptation by Backpropagation, 2015) with synthetic data as the source domain and security camera shots as the target domain. This approach gave us results similar to those of the simple model, with the only significant change being an increase in training time.

Production/Deployment/Platform

Our solution is hosted on the Neu.ro platform. This is an MLOps tool that allows for convenient and scalable model development and deployment in all conventional cloud services, e.g., AWS, GCP, Azure.

The platform consists of two parts:

  • Neu.ro Core is a resource orchestrator. It can be installed in a cloud or on-premise and combines computation capabilities, storage, and environments (Docker images) in one system with single sign-on (SSO) authentication and an advanced permission management system.
  • Neu.ro Toolbox is a toolset integrator. It contains integrations with various open-source and commercial tools required for modern ML/AI development.


This is how we’ve set up the whole process:

  1. The UI registers the RTSP camera streams.
  2. Grabber workers take snapshots from these streams and upload them to cloud storage.
  3. Processor workers facilitate interaction between the storage, the model’s API, and the analyzer. They basically get predictions for all grabbed photos and send them further to the analyzer.
  4. The rest of the system triggers required events based on the model’s predictions (for example, notifies if there is a certain number of people without masks) and collects the corresponding statistics.

We use RabbitMQ for load balancing. Here’s a diagram that explains this in more detail:

[Diagram: camera streams, Grabber and Processor workers, and RabbitMQ load balancing]
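The work-queue pattern behind this layout can be modeled with the Python stdlib: "grabber" code publishes snapshots to a shared queue, and several "processor" workers compete to consume them and emit predictions. In production the queue is RabbitMQ; here it is queue.Queue, and all names are illustrative.

```python
import queue
import threading

def processor(snapshots, predictions, predict):
    # Consume snapshots until a None sentinel arrives.
    while True:
        shot = snapshots.get()
        if shot is None:
            break
        predictions.put(predict(shot))

def run_pipeline(shots, predict, n_workers=2):
    snapshots, predictions = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=processor,
                                args=(snapshots, predictions, predict))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for shot in shots:                 # the "grabber" side publishes work
        snapshots.put(shot)
    for _ in workers:                  # one shutdown sentinel per worker
        snapshots.put(None)
    for w in workers:
        w.join()
    return sorted(predictions.queue)
```

The appeal of the pattern is that workers on either side can be added or removed independently, which is exactly what the per-worker presets below exploit.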

Neu.ro provides highly tweakable presets that allow you to run jobs on even a fractional amount of cloud CPU. In our case, this functionality dramatically reduces the overall cost of keeping the services up. In particular, we run all Grabber and Processor workers responsible for collecting and delivering data to the detection API on granular presets that use 20% of a single CPU unit.

Each worker is brought up with a single command, and the required number of them can be replicated with a simple script like this:

for i in $(seq 1 $NUM_WORKERS); do
  # --preset: the desired preset, e.g., cpu-nano with 0.2 CPU
  # --name:   a unique worker name
  # image:mask-detector-worker-image: the Docker image of the worker
  # ./app/run.py: the worker's entrypoint
  neuro run \
    --preset cpu-nano \
    --name "worker-$i" \
    image:mask-detector-worker-image \
    ./app/run.py
done

Results/Comparison

This article wouldn’t be complete without a comparison of the different approaches.

Classification model

ROC AUC    ACCURACY   PRECISION   RECALL
0.9378     0.9508     0.9863      0.9571

Detection model

mAP        PRECISION   RECALL
0.900403   0.91        0.85

We have achieved 93% classification accuracy on data from security street cameras. Our face detection model achieves 88% AP (while the original SotA RetinaFace has only 65% on the same data). In total, our pipeline has 73% mAP.

Here’s a good example of what our solution is capable of:
[Image: example detection results produced by our solution]

Conclusion

Getting into the issue of mask detection was quite a sobering and valuable experience. 

We were convinced that face detection in general is an extensively explored field, so finding a solution for a seemingly simple subset of face detection tasks should not have required digging too deep. However, after failing to find a satisfactory solution even at the level of reliable face detection, we realized that more work had to be done on our side than initially expected.

Exploring the field deeper, we were able to build a strong face detection and classification pipeline, achieve some very decent results, and compete with SotA performance. Having the Neu.ro platform at our disposal also helped make the development and deployment processes as quick and convenient as possible.


Machine and Deep Learning Research by PhD

A world-class researcher in Machine Learning, Deep Learning, Machine Learning Applications, Algorithms, and Theoretical Computer Science, Sergey has authored more than 170 research papers, several books and patents, and courses on machine learning, deep learning, and more. His bestselling “Deep Learning” book has become the standard source on deep learning in Russian. He also serves as an Advisor for Samsung AI Lab and Head of Laboratory at the Steklov Institute of Mathematics in St. Petersburg.

Sergey Nikolenko


Engineering Lab #1 — TEAM 3: “When PyTorch meets MLFlow” for Review Classification

Last week, Artyom Yushkovskyi of Neu.ro’s MLOps engineering team joined us at the regular MLOps coffee sessions with MLOps.Community. His team recently implemented a sentiment analysis solution for a large public dataset of restaurant reviews. Using an NLP approach, they were able to automate the classification of all such reviews as either positive or negative. Here are the key takeaways from this session (original article by the MLOps community here).

Team 3 participants:

Project: Summarising What We Did

The initial task definition was quite open: each team was required to develop an ML solution using PyTorch for model training and MLflow for model tracking. The team members all had more or less deep knowledge of different areas of Machine Learning, from Data Science and the underlying math to infrastructure and ML tooling, including DS project management and enterprise system architecture. So, the most difficult problem for us was choosing a dataset 😜. In the end, we chose the Yelp Review dataset to train an NLP model that classifies the provided texts as either positive or negative reviews. The data included reviews of restaurants, museums, hospitals, etc., and the number of stars associated with each review (0–5). We modelled this task as a binary classification problem: determining whether the review was positive (>=3 stars) or negative (otherwise).
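The star-to-label mapping described above fits in a tiny helper. The function name is ours; the 3-star threshold is from the write-up.

```python
def review_label(stars):
    """Binary sentiment label: positive for >=3 stars, else negative."""
    return "positive" if stars >= 3 else "negative"
```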


Figure 2. Metrics of Our Model in the MLflow Experiment Tracking UI

😎 From an MLOps perspective, there were several stages of the project’s evolution. First, we came up with a way of deploying the MLflow server on GCP and exposing it publicly. Also, we developed a nice Web UI where the user can write a review text and specify whether he or she considers this review to be positive or not, and then get the model’s response along with the statistics over all past requests. Having a Web UI talking to the model via REST API allowed us to decouple the front-end and back-end and parallelise development. Also, in order to decouple the logic of collecting model inference statistics in a database from the inference itself, we decided to implement a Model Proxy service with database access, and a Model Server exposing the model via a REST API. Thus, the Model Server could be seamlessly upgraded and replicated, if necessary. For the automatic model upgrade, we implemented another service called Model Operator, which constantly polls the state of the model registry in MLflow and, if the release model version has changed, it automatically re-deploys the Model Server.
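The Model Operator described above is essentially a polling loop. Here is a minimal sketch of that loop: the registry lookup and the deploy hook are injected stubs; in the real service they would talk to the MLflow model registry and re-deploy the Model Server.

```python
def operate(polled_versions, deploy):
    """Call `deploy` once for every change in the polled release version."""
    current = None
    for version in polled_versions:    # one item per polling tick
        if version != current:
            deploy(version)
            current = version

# Simulate five polling ticks where the release version changes twice.
deployed = []
operate(["v1", "v1", "v2", "v2", "v3"], deployed.append)
```

Separating the "what changed" logic from the "how to deploy" hook is what makes the service easy to test without a live registry.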

😊 So, in the end we managed to build a pipeline with the following properties:

  • partial reproducibility: a manually triggered model training pipeline running in a remote environment,
  • model tracking: all model training metadata and artifacts are stored in an MLflow model registry deployed in GCP and exposed to the outside world,
  • model serving: a horizontally scalable REST API microservice for model inference balanced by a REST API proxy microservice that stores and serves some inference metadata,
  • automatic model deployment: the model server gets automatically re-deployed once the user changes a model’s tag in the MLflow model registry.

😢 Unfortunately, we didn’t have time to completely finish the model development cycle. Namely, we didn’t implement:

  • immutable training environment: the training Docker image is built once and reused,
  • code versioning: we used the code as a snapshot, without involving a VCS,
  • data versioning: we used a dataset snapshot,
  • model lineage: can only be implemented if using code and data versioning,
  • GitOps: automatically re-training the model once the input has changed (code, data, or parameters),
  • model testing before deployment,
  • model monitoring and alerts (no hardware characteristics, health checks, or data drift detection),
  • fancy ML tools (hyperparameter tuning, model explainability tools, etc.),
  • business logic features required for production (HTTPS, authentication & authorization, etc.)

Technical takeaways

Though PyTorch (and PyTorch Lightning) is great and has tons of tutorials and examples, pickle for Deep Learning is still a pain. You need to dance around it for a while to save and load the model. We hope that the world will eventually converge on a standardised solution with an easy UX for this process.
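The usual workaround is to serialize only the model's parameters (the state_dict pattern) and rebuild the architecture in code when loading. The idea is sketched below with stdlib pickle and a toy class standing in for torch.save/torch.load and a real nn.Module.

```python
import pickle

class TinyModel:
    def __init__(self):
        # The architecture (here, just the parameter names) lives in code.
        self.weights = {"w": 0.0, "b": 0.0}

    def state_dict(self):
        return self.weights

    def load_state_dict(self, state):
        self.weights = dict(state)

model = TinyModel()
model.weights = {"w": 1.5, "b": -0.5}          # pretend we trained it

blob = pickle.dumps(model.state_dict())        # serialize parameters only

restored = TinyModel()                         # architecture rebuilt from code
restored.load_state_dict(pickle.loads(blob))   # parameters from the blob
```

Pickling the whole model object instead ties the artifact to the exact class layout and import paths of the training code, which is where most of the pain comes from.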

MLflow is an awesome tool for tracking your model development progress and storing model artifacts.

  • It can be easily deployed in Kubernetes and has a nice minimalistic and intuitive interface.
  • We couldn’t find any good solution for authentication and role-based access control, though, so this went out of the project’s scope.
  • We also found MLflow Model Serving too difficult to run in a few hours, mostly because of the lack of clear documentation.
  • In addition, we were surprised that we couldn’t find a solution for automatically deploying the model that gets the “Production” tag in the MLflow UI. Deploying models directly from the MLflow Server dashboard could be a viable pattern and a good addition to MLflow’s core functionality.

Kubernetes is amazing! It’s terrifying at first, but terrific after a while. It enables you to deploy, scale, connect, and persist your apps easily in a very clear and transparent way. However, we found it difficult to parametrize bare Kubernetes resource definitions (without using Helm charts). We needed to pass one or a few parameters to the yaml definition before applying it, and here are the ways we figured out to tackle this problem:

  • Pack the set of k8s configuration files into a Helm chart (or use alternatives to Helm, like kubegen). This is a jedi way to manage complex deployments, as it gives you full flexibility, but it takes time to implement.
  • Use k8s resource ConfigMap to configure other resources. This approach is very easy to implement (just add a resource configuration), but is not flexible enough (for example, you can’t parametrize container images). However, we used it for parametrizing the Model Server configuration.
  • Another, the “dirtiest” way to solve this problem is to use the envsubst utility. Briefly, you process your configuration yaml with a tool that syntactically replaces all instances of the specified environment variables with their actual values (see the example for Model Operator). Any other sed-like tool would work here as well.
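The envsubst trick from the last bullet can be reproduced with the Python stdlib: substitute environment variables into a raw Kubernetes manifest before applying it. The manifest and the MODEL_IMAGE variable below are illustrative, not taken from the actual project.

```python
import os
from string import Template

manifest = """\
apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
    - name: server
      image: $MODEL_IMAGE
"""

os.environ["MODEL_IMAGE"] = "registry.example.com/model-server:v2"
rendered = Template(manifest).substitute(os.environ)
```

string.Template uses the same $VAR syntax as envsubst, so the same manifest works with either tool.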

Self-management takeaways

Looking back, we can say that our team suffered from a lack of communication: we started discussing system design without having a single call to meet each other and understand each other’s feedback and wishes; we didn’t define a clear MVP and didn’t have a common understanding of what the final goal was. Nevertheless, we learned many important truths about collaboration and project planning, namely:

  • Do not try to over-plan the project from the beginning (each step in the project plan at the beginning should cover a large piece of responsibility, rather than being too specific),
  • Use an iterative approach (define a clear MVP and the steps to achieve it, and then distribute tasks among the team members),
  • Respect project timing (avoid situations where you have to write code on the last night before the deadline). This is especially hard in teams working in their free time, after work!


ML/DL Research Meets Real World Implementations

Manuel Morales, PhD, is Senior AI Advisor to Neu.ro, focusing on finance, banking and Fintech. Professor Morales brings extensive experience in both research and applied ML to the Neu.ro team. 

He serves as an Associate Professor of Financial & Actuarial Mathematics in the Department of Mathematics and Statistics at the University of Montreal, where his current research interests include Applied ML in banking and in responsible investment.

Formerly, Professor Morales served as Chief AI Scientist at the National Bank of Canada, where he led the scientific efforts of the bank’s strategy to leverage AI technologies across all verticals. While playing a leading role in the AI transformation initiative of the bank, he had the opportunity to work on a wide variety of projects from wealth management to retail banking applications.

Since 2018, Dr. Morales has also been the General Director of the Fin-ML Network.

The Fin-ML Network (Machine Learning in Finance) was created to develop the global competitiveness of the Canadian finance sector by promoting and supporting the development and use of innovative machine learning technology in quantitative finance and financial business analytics.

Manuel’s research interests include Representation Learning in banking, Deep Learning methods in high-frequency market surveillance, explainability in the context of model governance, and leveraging alternative data to assess environmental, social, and governance (ESG) factors in the context of responsible investment.


Neu.ro joins the AI Infrastructure Alliance

Neu.ro is humbled to be a Founding Member of the AI Infrastructure Alliance, working with 25 of the world’s most innovative companies to build the canonical tech stack for AI/ML.

Our partnerships in the Alliance will help to create a Canonical Stack for AI by driving strong engineering standards and creating seamless integration points between various layers of the AI infrastructure ecosystem. 

The AI and ML space currently lacks a standard set of tools and solutions, blocking data science teams from sharing their work and collaborating across the world. Instead, there is a wild proliferation of proprietary, cloud lock-in solutions that benefit individual companies, but not the data scientists and engineers building the AI applications of today and tomorrow. The Alliance came together to help those data science teams break out of lock-in so they can build on top of a standardized, open platform that works across all of their environments.

“Time and again, I’ve seen development teams get excited about the potential of AI to transform their business and applications, only for them to get stopped dead in their tracks by a fragmented and confusing array of technologies with little to no integration,” said Dan Jeffries, Director of the AIIA. “Despite a massive surge of partial solutions, no single tool exists that lets teams leverage the true power and potential of AI. The AI Infrastructure Alliance will help create clarity in this confusing space by building a cohesive framework and bringing together leaders and innovators to help set the standard for how data science teams build models now, and into the future.”

The AI Infrastructure Alliance provides a range of benefits to member companies, including the opportunity to:

  • LEARN – Individual data scientists and data engineers get together in the real world and the digital world to connect, network, learn, share, and find jobs and collaboration partners.
  • CREATE – AI infrastructure software creators will work with other vendors to define engineering standards and drive adoption for their tools.
  • INTEGRATE – Solutions Integrators will help enterprises incorporate AI/ML into businesses and products.
  • ADOPT – Engineering and C-suite leaders will receive guidance and support as they look to make platform and tooling decisions for their teams.
  • NETWORK – Thought leaders, VCs, industry analysts and others can keep up to date with the latest industry trends, discover best practices, network and exchange information with their peers.

“Creating a place for top AI companies to work together will speed development of the infrastructure that businesses really need to make the promise of AI a reality,” said Joey Zwicker, Co-Founder of Pachyderm, a founding member of the AIIA. “As the Canonical Stack comes together, it will vastly reduce time to value for any company, in any industry, that’s leveraging AI across their business.”

Core founding members include Pachyderm, Seldon, Determined AI, Algorithmia, Tecton, ClearML by Allegro AI, Neu.ro, ZenML by Maiot, DAGsHub, TerminusDB, WhyLabs, YData, Superb AI, Valohai, Superwise.ai, cnvrg.io, Arize AI, CometML, Iguazio, UbiOps, and Fiddler. These companies have raised over $200M in collective venture capital funding from top firms including Andreessen Horowitz, Sequoia Capital, GV, Benchmark, Norwest Venture Partners, Madrona Venture Group, and Gradient Ventures.


AI Transformation Takes a Village

Arthur is an AI Strategist focusing on Digital Transformation, hardware, and cloud infrastructure. Working with industry leaders in the GPU, cloud services, and ML software sectors, he is dedicated to helping startups and enterprises find the hardware and cloud solutions they need to responsibly scale AI Transformation. Arthur takes a hands-on approach to engaging tech and business teams to facilitate the development of specific AI/ML use cases, educate stakeholders, select ML technologies, and integrate AI/ML solutions into business processes.

Arthur’s partnership mission is focused on growing Neu.ro’s ecosystem to include the world’s most innovative technology partners, establishing a network that provides our clients with all the resources, technologies and people required to effectively and safely scale their AI strategies.

Arthur is passionate about learning and is most happy in a room of smarter people developing solutions to hard problems.

Certifications:
MIT Sloan Executive Education – Digital Transformation
Columbia University Executive Education – Executive Data Scientist
AWS Fundamentals Specialization
