An open-source multi-agent framework that automates data science pipelines with minimal human input.
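As a rough sketch of what automating a pipeline with multiple agents can look like, the plain-Python example below chains specialist roles over a shared context. The `Agent` class, role names, and `run_pipeline` function are illustrative assumptions, not this framework's API.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A single pipeline role with one narrow responsibility."""
    name: str
    task: str

    def run(self, context: dict) -> dict:
        # In a real framework this step would call an LLM or a tool;
        # here each agent just records that its step completed.
        context[self.name] = f"completed: {self.task}"
        return context

def run_pipeline(dataset_path: str) -> dict:
    """Chain specialist agents over a shared context with minimal input."""
    agents = [
        Agent("profiler", "inspect schema and summary statistics"),
        Agent("cleaner", "impute missing values and drop duplicates"),
        Agent("modeler", "train and tune a baseline model"),
        Agent("reporter", "summarize metrics and next steps"),
    ]
    context = {"dataset": dataset_path}
    for agent in agents:
        context = agent.run(context)
    return context

if __name__ == "__main__":
    print(run_pipeline("sales.csv"))
```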
An open-source framework for developing autonomous data labeling agents that learn and adapt iteratively.
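The learn-and-adapt loop such agents run typically resembles active learning: label a small batch, update the model, then prioritize whatever the model is least confident about. A minimal sketch under that assumption, with every name hypothetical:

```python
import random

def model_confidence(example: str, labeled: dict[str, str]) -> float:
    """Stand-in for a trained model's confidence; grows as labels accumulate."""
    rng = random.Random(hash(example) + len(labeled))
    return min(1.0, rng.random() + 0.1 * len(labeled))

def agent_label(example: str) -> str:
    """Stand-in for the agent's labeling call (an LLM call in practice)."""
    return "positive" if len(example) % 2 == 0 else "negative"

def labeling_loop(pool: list[str], rounds: int = 3, batch_size: int = 2) -> dict[str, str]:
    """Label in rounds, always picking the least-confident examples first."""
    labeled: dict[str, str] = {}
    unlabeled = list(pool)
    for _ in range(rounds):
        if not unlabeled:
            break
        # Adapt: prioritize the examples the current model is least sure about.
        unlabeled.sort(key=lambda ex: model_confidence(ex, labeled))
        batch, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
        for example in batch:
            labeled[example] = agent_label(example)
    return labeled

print(labeling_loop(["great product", "terrible", "okay I guess", "meh", "loved it"]))
```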
A comprehensive platform offering observability, evaluation, and debugging tools for building and optimizing large language model (LLM) applications.
An open-source LLM engineering platform offering observability, metrics, evaluations, and prompt management to debug and enhance large language model applications.
An AI observability and LLM evaluation platform that helps AI developers and data scientists monitor, troubleshoot, and improve the performance of machine learning models and LLMs.
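Observability platforms of this kind are built around tracing: wrapping each model call to record inputs, outputs, latency, and errors for later inspection and evaluation. A minimal, library-agnostic sketch of that pattern; the `traced` decorator and `TRACES` store are hypothetical, not any platform's API:

```python
import functools
import time
import uuid

TRACES: list[dict] = []  # stand-in for a platform's trace store

def traced(fn):
    """Record inputs, outputs, status, and latency for each wrapped call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"id": str(uuid.uuid4()), "name": fn.__name__,
                "input": {"args": args, "kwargs": kwargs}}
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            span["output"] = result
            span["status"] = "ok"
            return result
        except Exception as exc:
            span["status"] = f"error: {exc}"
            raise
        finally:
            span["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
            TRACES.append(span)
    return wrapper

@traced
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return prompt.upper()

call_llm("summarize this document")
print(TRACES)
```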
A Python library that provides data validation and settings management using type annotations.
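Type-annotation-driven validation works by reading a class's declared types at runtime and checking incoming values against them. A minimal sketch of the idea in plain Python; the `Validated` base class is an illustration, not this library's actual API:

```python
class Validated:
    """Check constructor arguments against the class's type annotations."""
    def __init__(self, **values):
        for field, expected in self.__class__.__annotations__.items():
            if field not in values:
                raise ValueError(f"missing field: {field}")
            value = values[field]
            if not isinstance(value, expected):
                raise TypeError(
                    f"{field} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
            setattr(self, field, value)

class Settings(Validated):
    host: str
    port: int
    debug: bool

ok = Settings(host="localhost", port=8080, debug=False)
print(ok.host, ok.port)

try:
    Settings(host="localhost", port="8080", debug=False)
except TypeError as e:
    print(e)  # port must be int, got str
```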