Dataloop Integrates NVIDIA NIM to Accelerate Running and Deploying Generative AI

The Dataloop platform will integrate NVIDIA NIM inference microservices to help AI and developer teams seamlessly bring generative AI into their applications.

Dataloop, a provider of an enterprise-grade end-to-end AI development platform for building and deploying powerful AI applications, today announced its integration with NVIDIA NIM inference microservices.

Generative AI could automate up to 70% of all business activities, according to McKinsey, yet the road to adoption can be lengthy and full of potholes: in one survey, over half of the companies with AI projects reported that those projects remained stuck in the pilot or proof-of-concept stage. Facilitating efficient AI development, backed by well-managed, optimized data, is the key to unlocking AI's value.

By embedding NVIDIA NIM into its AI development platform, Dataloop is helping businesses accelerate generative AI concepts into production. The integration gives Dataloop users enhanced security and control over generative AI applications and data through self-hosted deployment of the latest AI models on their choice of infrastructure, whether on premises or in the cloud. It also accelerates the adoption and distribution of generative AI by providing Dataloop users with a seamless way to deploy NVIDIA-optimized, accelerated models, simplifying the process of building innovative AI applications.

NVIDIA NIM, part of the NVIDIA AI Enterprise software platform, is a set of inference microservices that simplifies the deployment of generative AI models. It speeds time to value with pre-built, cloud-native microservices that are continuously maintained to deliver optimized inference on NVIDIA-accelerated infrastructure, giving developers industry-standard APIs and tools tailored for enterprise environments. NIM is also built on enterprise-grade software, with dedicated feature branches, rigorous validation processes, and support for deploying enterprise generative AI models across the cloud, data centers, and workstations.
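The industry-standard APIs mentioned above are OpenAI-compatible REST endpoints served by each NIM container. As a minimal sketch, assuming a NIM microservice is already running locally on port 8000 and serving a model such as `meta/llama3-8b-instruct` (both assumptions, not details from this announcement), a developer could query it like this:

```python
import json
import urllib.request

# Assumed local NIM endpoint; the host/port depend on how the container was deployed.
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query_nim(model: str, prompt: str) -> str:
    """POST the request to the NIM microservice and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        NIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example (requires a running NIM container):
# print(query_nim("meta/llama3-8b-instruct", "Summarize NIM in one sentence."))
```

Because the interface follows the OpenAI chat-completions convention, existing client code and tooling can typically be pointed at a self-hosted NIM endpoint with only a base-URL change.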

“Integrating NVIDIA NIM into the Dataloop platform marks a significant milestone in our commitment to providing our users with the best tools for AI development,” said Avi Yashar, CEO of Dataloop. “This collaboration with NVIDIA not only enhances our platform’s capabilities but also drives innovation in the AI industry by enabling enterprises to leverage cutting-edge AI technology with greater efficiency and ease.”

“Developers are seeking a fast and accessible path to building generative AI applications with the speed, accuracy, and security required for enterprise operations,” said Pat Lee, vice president of strategic enterprise partnerships at NVIDIA. “With NVIDIA NIM integrated in the Dataloop platform, developers can customize and implement sophisticated generative AI pipelines with greater ease.”

The Dataloop platform offers a holistic developer experience and seamless usability with any NIM microservice. Users can deploy custom NIM microservices into a pipeline up to 100x faster with a single click, integrating them smoothly into any AI solution and workflow. Common use cases for the integration include retrieval-augmented generation (RAG), large language model (LLM) fine-tuning, chatbots, and reinforcement learning from human feedback (RLHF) workflows. The integration is poised to accelerate AI data pipelines and the deployment of generative AI models across industries such as automotive, retail, and media.
