By: citybiz
July 14, 2025
Q&A with Amit Sharma, Founder & CEO at CData
As founder and CEO of CData Software, Amit Sharma defines CData's technical platform and business strategy. His leadership has guided the company's rise from a startup to a leading provider of data access and connectivity solutions. Amit holds an MS in Computer Networking from North Carolina State University and an MBA from Duke's Fuqua School of Business.
The AI revolution is creating massive demand for what Gartner calls “AI-Ready Data,” and that’s playing directly into our strengths.
Can you walk us through your journey before founding CData?
I started my career at Infosys Technologies in India, which gave me exposure to large-scale enterprise technology challenges. That experience taught me how established organizations approach complex technical problems, but I was drawn to the innovation happening in smaller, more agile environments.
I later joined a network security startup in the US. The contrast between working at a massive IT services company versus a high-energy startup was eye-opening. I realized I thrived in environments where you could move quickly, make decisions with limited resources, and directly impact the product direction.
My path eventually led me to /n software, where I worked alongside my future co-founders developing data connectivity solutions. We spent years building technology and searching for the right product-market fit. We had developed solid technology, but getting market traction during economic uncertainty was challenging.
The breakthrough came when we realized our data connectivity components were the strongest part of our offering. Organizations were struggling to connect their systems – whether databases, applications, or cloud services – and there wasn’t an elegant, standards-based solution. Instead of trying to be everything to everyone, we decided to spin CData out of /n software and double down on solving one problem really well: making it simple for organizations to connect their data sources using familiar standards like SQL.
That focused approach became the foundation for what CData is today – helping thousands of organizations access their data anywhere, without the complexity that traditionally came with data integration projects.
What was the “spark” that motivated you to build CData’s data connectivity platform?
During my time at /n software, we kept seeing the same pattern across different clients. Companies held valuable data across systems like CRM, ERP, databases, and cloud apps, but integrating them was often complex and time-consuming. IT teams were spending weeks or months building custom integrations for what should be simple data access requests.
The frustrating part was that most of these systems had APIs and the data was technically accessible, but there was no standardized way to connect them. Every integration was a custom project requiring specialized knowledge of each system’s unique interface. Business users who just wanted to analyze their sales data alongside their marketing metrics were stuck waiting for technical resources that were always stretched thin.
We realized that if we could create a universal approach – using SQL as the common language that most people already understood – we could eliminate this friction entirely. Instead of learning dozens of different APIs, users could simply write a SQL query to access any data source, whether it was Salesforce, Oracle, or a REST API.
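To make the idea concrete: with a standards-based connector (ODBC, JDBC, or a DB-API driver), a SaaS system is queried exactly like a local database. The sketch below uses Python's built-in sqlite3 as a stand-in for such a driver; the `Account` table and its columns are illustrative, not CData's actual schema.

```python
import sqlite3

# sqlite3 plays the role of a standards-based connector here.
# With a real driver, the connection would point at Salesforce,
# Oracle, or a REST API instead of an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (Name TEXT, AnnualRevenue REAL)")
conn.executemany(
    "INSERT INTO Account VALUES (?, ?)",
    [("Acme", 1200000.0), ("Globex", 350000.0)],
)

# The same SQL would run unchanged against any source exposed
# through the connector -- that is the point of the approach.
rows = conn.execute(
    "SELECT Name FROM Account WHERE AnnualRevenue > 500000"
).fetchall()
print(rows)  # [('Acme',)]
```

The value is that the query, not the source system, is the interface: swapping Salesforce for Oracle changes the connection string, not the SQL.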
That “aha moment” was recognizing that the problem wasn’t the complexity of individual systems, but the lack of a unified, standards-based approach to data connectivity. Once we focused on building that bridge – making any data source look and behave like a familiar database – everything clicked into place. Organizations could finally access their data on their terms, without the traditional integration headaches.
Data connectivity is a complex challenge. How does CData set itself apart?
We stand out from other data connectivity providers through a unique blend of simplicity, broad data source coverage, and a strong focus on real-time, seamless integration. Our connectors are the result of years of development and real-world use by both direct customers and OEM partners, offering depth and reliability that’s hard to match.
Unlike many competitors, our solutions support connectivity to more than 250 data sources, including SaaS, NoSQL, and Big Data platforms, with minimal setup and no coding required. This empowers both technical and non-technical users to integrate and access data in real time without complex configurations.
A key differentiator is our support for both data movement and live data access. With CData Sync, users can replicate data into any database or warehouse, while CData Connect Cloud and CData Virtuality enable live data access across the applications and platforms businesses rely on daily.

Our platform is also built with enterprise-grade scalability and security. Features like centralized auditing, encryption, and compliance with data privacy regulations ensure that businesses can confidently trust us with their most sensitive data. This powerful combination of flexibility, real-time access, and robust security positions us as a leader in the data integration space.
How do you balance building new connectors versus optimizing the platform’s core functionality?
We’ve learned that the balance isn’t static – it shifts based on market needs and where we see the biggest pain points. Early on, breadth of connectors was critical because organizations needed to connect to their existing systems before they could realize any value from the platform. You can’t optimize performance on a connector that doesn’t exist yet.

But as we’ve matured, we’ve become much more strategic about this balance. We now have data on which connectors are most heavily used, which systems our customers are prioritizing, and where performance bottlenecks actually impact their daily workflows. That guides our roadmap decisions.

For new connectors, we focus on emerging platforms and systems where our customers are asking for support – whether that’s a new SaaS application that’s gaining traction or cloud services that are becoming standard in certain industries. But we’re not just building connectors to check boxes. Each new connector needs to solve a real connectivity challenge for a meaningful segment of our user base.
The optimization work has become even more critical as we’ve built higher-value solutions on top of our connector foundation – like CData Sync for automated data replication and our cloud platform offerings. These products depend on our connectors performing reliably at scale, so when we see patterns like certain types of queries hitting performance walls or specific connectors struggling with large datasets, that becomes an immediate priority.

What’s exciting is that our connector improvements now have a multiplier effect. When we optimize a Salesforce connector, it doesn’t just benefit direct users – it improves performance across CData Sync, our cloud services, and any integrated platform that relies on that connection.
The key insight we’ve had is that a mediocre connector to a critical system is often more valuable than a perfect connector to a niche platform, but as we build more sophisticated solutions on top of our connectivity layer, the quality bar keeps rising. We’d rather have reliable, performant access to the systems our customers depend on every day than an exhaustive catalog of rarely used connections.

It’s about building what matters most, then making it work exceptionally well across our entire platform.
CData powers a wide range of applications behind the scenes. Can you tell us more about your embedded connectivity business and how it has evolved?
What started as individual developers downloading our connectors has evolved into deep partnerships with some of the world’s leading technology companies. These organizations recognize that building and maintaining hundreds of data connectors isn’t their core competency – they’d rather focus on their platform’s unique value while leveraging our connectivity expertise.
We’re seeing this play out across different types of partnerships. With Palantir, for example, they’re embedding our connectivity to help their enterprise customers bring data from various sources into their analytics platform without the traditional integration overhead. It’s the same story with Google Cloud – they want to focus on their cloud infrastructure and AI capabilities, not on building custom connectors for every SaaS application their customers use.
Our relationship with Salesforce has been particularly transformative. They’re leveraging CData connectors across multiple flagship products – Tableau, Tableau CRM, and now Salesforce Data Cloud. With Data Cloud specifically, we’re enabling Salesforce users to ingest data from virtually any system into their unified customer platform, creating that comprehensive view of the customer journey that’s so critical for modern businesses.
The SAP partnership shows how even established enterprise software companies are recognizing the value of standardized connectivity. Rather than building their own data integration solutions from scratch, they’re leveraging our connector library to give their customers immediate access to hundreds of data sources.
What’s evolved significantly is the sophistication of these integrations. Early on, partners might embed a handful of our most popular connectors. Now we’re seeing deeper integrations where our entire connectivity platform becomes a core component of their data strategy. Partners can offer their users access to virtually any data source while we handle the complexity of maintaining those connections as APIs change and new platforms emerge.
The embedded model also creates a flywheel effect – as more platforms integrate our connectivity, we get better insights into real-world usage patterns, which helps us prioritize both new connectors and performance optimizations that benefit the entire ecosystem.
It’s validation that connectivity has become infrastructure – something you want to be reliable and comprehensive, but not something you want to build yourself.
CData recently released MCP – what was the driving force to build it?
Generative AI has revolutionized our lives by applying its capabilities to public data. But it has been slow to bring that revolution to the workplace because of the difficulty of connecting LLMs to enterprise data. CData MCP Servers are a new way to securely connect any LLM with Model Context Protocol support to over 350 enterprise data sources instantly, without custom wrappers or risky workarounds. With direct, standardized, SQL-based access, AI tools can finally talk fluently to business data.
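Under the hood, MCP is a JSON-RPC 2.0 protocol, so an LLM client invokes a server-side tool with a `tools/call` request. The sketch below shows what such a message might look like for a SQL-capable MCP server; the tool name `query` and its arguments are hypothetical, not CData's actual interface.

```python
import json

# A minimal JSON-RPC 2.0 message an MCP client could send to invoke
# a tool on an MCP server. Tool name and arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",   # standard MCP method for tool invocation
    "params": {
        "name": "query",      # hypothetical SQL tool exposed by the server
        "arguments": {
            "sql": "SELECT Name FROM Account LIMIT 5",
        },
    },
}

payload = json.dumps(request)
print(payload)
```

Because the request is just standardized JSON-RPC plus SQL, any MCP-aware LLM can issue it without a custom wrapper per source system.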
There’s a lot of buzz around MCP. While other MCP offerings focus on querying data or driving action in a single source system, CData MCP Servers are ideal for LLM-powered data analysis across multiple data sources.
With AI-driven analytics on the rise, how is CData adapting?
The challenge is that AI models are only as effective as the data they can access, yet most enterprise data remains locked in silos. Organizations might have customer data in Salesforce, financial data in NetSuite, operational data in various databases, and marketing data across multiple platforms. To build meaningful AI applications, they need all of that data to be accessible and combinable – what Gartner and the broader industry are calling “AI-Ready Data.”
What’s exciting is that our connectivity platform is perfectly positioned for this shift. AI-Ready Data isn’t just about having access to information – it’s about having real-time, standardized access that AI applications can actually consume and analyze. Instead of organizations having to build complex data pipelines to feed AI models, they can use our connectors to give AI applications direct access to any data source in a format that’s immediately usable.
Whether it’s a machine learning model that needs to analyze customer behavior patterns across multiple touchpoints or an AI-powered dashboard that combines sales, marketing, and support data, our connectivity layer ensures that data becomes AI-Ready from the moment it’s accessed.
We’re also seeing AI change how people interact with data itself. Business users who might never write SQL are now asking natural language questions that get translated into queries behind the scenes. But those queries still need to access AI-Ready Data from dozens of different systems, which is where our standardized connectivity approach becomes essential.
The partnerships we have with companies like Salesforce Data Cloud and Google Cloud are particularly relevant here. These platforms are building sophisticated AI capabilities, but they depend on our connectivity to ensure their AI models can actually access the breadth of AI-Ready Data they need to be effective.
Rather than seeing AI as something separate from connectivity, we view it as validation that making data AI-Ready through seamless access is more important than ever. The more sophisticated these AI applications become, the more they need reliable, real-time access to comprehensive, immediately consumable data sets – and that’s exactly what we enable.
Where do you see CData in the next 3–5 years?
We’re at an inflection point where data connectivity is becoming fundamental infrastructure, and I see CData evolving to meet that reality.
In the next 3-5 years, I expect we’ll be powering data access for thousands more organizations across multiple touchpoints. Sometimes we’ll be the direct solution – where data teams choose CData specifically because they need our connectivity capabilities. Other times we’ll be embedded within the platforms they’re already using, like Salesforce Data Cloud or Google Cloud, where our connectivity becomes part of their broader data strategy.
Our embedded business will continue to be a major growth driver. We’re already seeing this with partners like Salesforce Data Cloud, Google Cloud, Palantir, and SAP, but I think every major platform will eventually need comprehensive connectivity capabilities. Rather than each company building their own connector ecosystem, they’ll leverage ours so they can focus on their core differentiators.
The AI revolution is creating massive demand for what Gartner calls “AI-Ready Data,” and that’s playing directly into our strengths. Organizations building AI applications need access to all their data sources in real-time, and they need it to be reliable and standardized. I see CData becoming the connectivity layer that makes enterprise AI actually feasible at scale.
We’re also expanding our cloud platform offerings beyond just connectors. CData Sync and our other cloud services show how we can deliver higher-value solutions built on our connectivity foundation. I expect we’ll continue moving up the stack while maintaining our core strength in universal data access.
Geographically, there’s a huge opportunity in international markets where organizations are dealing with the same data silos we’ve solved in North America. The principles of standardized connectivity are universal.
Ultimately, I see CData becoming the connectivity standard that enables the next generation of data-driven applications – whether that’s AI, analytics, or technologies we haven’t even imagined yet. Our job is to make sure accessing any data source is as simple as writing a SQL query, regardless of what’s built on top of that foundation.