DA-bench

Visual Benchmark for Data Analytics AI Agents

What is DA-bench?

DA-bench (the “Data Analyst Benchmark”) is a series of questions that a data analyst would be expected to be able to solve.

This benchmark was created to help us test Unsupervised and other data tools against real-world problems, so that we can understand the strengths and weaknesses of various approaches to automating analytics.

DA-bench is a visual benchmark: you can see both the score and a video of how every tool performs on every test. See some examples below:

Leaderboard

Tool                        Type                     Data Access              Date Tested        Score
Databricks IQ               Co-pilot                 Use your Data Warehouse  November 18, 2024  74.9%
Unsupervised                Full Agent               Use your Data Warehouse  November 14, 2024  73.5%
MicroStrategy Auto Answers  Full Agent               Use your Data Warehouse  November 18, 2024  59.5%
Amazon Q in QuickSight      Co-pilot                 Use your Data Warehouse  November 13, 2024  46.2%
Einstein for Tableau        Co-pilot                 Use your Data Warehouse  November 14, 2024  45.4%
Qlik Sense Insight Advisor  Natural Language Search  Use your Data Warehouse  November 12, 2024  43.4%
ChatGPT Data Analyst        Full Agent               Use Data In Memory       November 19, 2024  41.2%
Snowflake Copilot           Co-pilot                 Use your Data Warehouse  November 19, 2024  40.2%
ThoughtSpot Sage            Natural Language Search  Use your Data Warehouse  November 14, 2024  40.0%
Julius                      Full Agent               Use Data In Memory       November 7, 2024   34.8%
BigQuery                    Co-pilot                 Use your Data Warehouse  November 20, 2024  31.5%
IBM Cognos Assistant AI     Natural Language Search  Use Data In Memory       November 4, 2024   12.8%
SAP Just Ask                Natural Language Search  Use your Data Warehouse  November 15, 2024   6.7%
Snowflake Cortex Analyst    Not Yet Tested
Databricks Genie            Not Yet Tested
Google Gemini Code Assist   Not Yet Tested

About

The Data Analyst Benchmark is a collection of datasets and prompts that can be used to test how automated analytics tools handle common data analyst tasks.

We use this information to help us prioritize work to improve our AI. We are making it publicly available to help other companies improve their tools and to help users evaluate which tools are relevant to their problems.

DA-bench currently tests dozens of prompts across 9 categories. Evaluation is performed manually by a third party; scores and videos of test results are displayed on dabench.com.

More Info

DA-bench is maintained by Unsupervised.

Suggestions and contributions are welcome on the DA-bench GitHub repository.