By: citybiz
October 17, 2025
Q&A with Ryan McElroy, Vice President of Technology of Hylaine: Solving the AI Data Readiness Problem
Ryan McElroy is the Vice President of Technology at Hylaine, a values-first technology consulting firm. He partners with Fortune 1000 companies to modernize systems, accelerate software delivery, and drive data accuracy for effective use of AI. He leads Hylaine’s technology vision and innovation team, driving thought leadership, developing cutting-edge solutions and amplifying the firm’s presence in key U.S. markets.
Over the years, he has advanced from consultant to VP, building a career that reflects both deep technical expertise and a strong understanding of how software-based technology drives business outcomes. With a career defined by curiosity and adaptability, Ryan is passionate about bridging strategy and technology—never confining himself to a single vertical or stack, but instead seeking innovative solutions that deliver measurable impact for clients.
Ryan’s experience spans application development, business intelligence, and DevOps, with much of his work rooted in the Microsoft ecosystem and industries such as insurance, healthcare, and manufacturing.
Earlier in his career, Ryan served as a Software Engineer at SentryOne and a Consultant at CTS, gaining hands-on expertise that informs his pragmatic, business-focused approach to technology today.
Ryan is also a workshop facilitator and frequent speaker at leading industry events on innovation topics including AI Success Starts with Your Data, Data Reliability Engineering, The Return of the Datacenter, and Data DevOps.
Ryan earned his Bachelor’s Degree in Computer Software Engineering from Auburn University.
Watch the Hylaine webinar, AI Readiness Starts with Your Data, and download Hylaine’s new white paper, which provides four practical, actionable approaches to solving the AI data readiness problem.
Many AI initiatives fail due to lack of data readiness, with data stuck in silos or riddled with errors. From your experience, what are the most common roadblocks companies encounter when preparing data for AI?
The biggest barriers to preparing data for AI are structural, technical, and organizational. We consistently see challenges in five areas: data access, siloed systems, data quality, governance, and the human factor.
Data access issues often stem from data that exists but can’t be used due to legal or security blocks—or because it’s housed in incompatible formats or legacy systems. Siloed data remains a long-standing problem, especially as enterprises spread operations across multiple cloud platforms. Even when data can be accessed, quality problems like inaccuracies, redundancies, and incomplete records undermine model accuracy and lead to hallucinations or bias.
Governance adds another layer of complexity. Companies must ensure compliance with privacy regulations and avoid exposing PII (Personally Identifiable Information), especially in sensitive industries like insurance and healthcare. Finally, the human factor—unclear business objectives, unrealistic expectations, and poor communication between IT and business teams—can derail AI projects.
Where should tech leaders focus first to ensure their AI initiatives succeed?
We talk about AI projects failing in their first quarter, so let’s cut to the chase with this question.
Tech leaders should begin by building a mature, AI-ready data infrastructure. That includes investing in data engineering tools and talent. It also means modernizing data architectures to handle additional ways of collecting, processing, and storing data at the scale and velocity AI requires. Often, traditional data architecture is not optimized for this. Companies that have both data warehouses (with curated, reliable, and structured data sets) and data lakes (built to accommodate diverse data types) have a head start.
In parallel, leaders should establish data reliability engineering (DRE) as a core capability in the data organization to ensure ongoing data quality, availability, and observability. These same capabilities streamline testing and root cause analysis when errors occur in data movement.
Once basic infrastructure and high-quality architecture are in place, they can adopt modern tools for data integration. These can take the form of highly managed ELT (Extract, Load, Transform) tools such as Fivetran or Airbyte, or cloud-native ETL (Extract, Transform, Load) platforms like Azure Data Factory or Databricks. Finally, it’s critical to define strong governance frameworks early—so AI systems can access compliant, trustworthy data from the start.
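To make the ELT pattern concrete, here is a minimal, vendor-neutral sketch in Python (purely illustrative; Fivetran, Airbyte, Azure Data Factory, and Databricks each have their own interfaces). In ELT, raw records are landed first and only then deduplicated and validated in a transform step inside the warehouse layer:

```python
# Illustrative ELT sketch: extract/load land the data untouched,
# then transform curates it. All field names here are hypothetical.

raw_source = [
    {"id": "1", "amount": "19.99", "region": "SE"},
    {"id": "2", "amount": "bad",   "region": "NE"},  # a data quality problem
    {"id": "1", "amount": "19.99", "region": "SE"},  # a duplicate record
]

def extract_and_load(source):
    """E and L: copy records into the 'lake' as-is, schema-on-read style."""
    return list(source)

def transform(landed):
    """T: deduplicate and validate inside the warehouse layer."""
    seen, curated = set(), []
    for row in landed:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # quarantine malformed rows rather than failing the load
        if row["id"] in seen:
            continue  # drop duplicates
        seen.add(row["id"])
        curated.append({"id": row["id"], "amount": amount, "region": row["region"]})
    return curated

landed = extract_and_load(raw_source)
curated = transform(landed)
```

The design point is the ordering: because the raw data is landed before any cleanup, quarantined rows remain available in the lake for root cause analysis instead of being lost at ingestion.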
What lessons can companies take from successes like American Express and AstraZeneca when building AI systems in highly regulated or complex industries?
Both American Express and AstraZeneca show that investment in robust, AI-ready data architecture builds the foundation for reliable, repeatable processes to develop AI systems that retain feedback, remember context, and improve over time—producing positive return on investment (ROI). American Express built a system capable of analyzing transactions from millions of cardholders and merchants in real time to spot patterns of potential fraud—because its data architecture could support continuous learning and feedback loops. AstraZeneca’s investment in a strong data foundation enables its AI to inform drug discovery and clinical trial design and to improve the efficiency of regulatory submissions—all within strict compliance boundaries.
The lesson is clear: AI success in regulated industries depends on governance as much as innovation. Data must be clean, secure, and traceable, and governance must be built into the architecture from day one. When organizations design their data systems with compliance, transparency, and auditability in mind, they can confidently scale AI and demonstrate measurable business outcomes.
How can IT and business teams work together to build trust in AI systems and encourage adoption across the organization?
Trust comes from transparency, explainability, and collaboration. IT teams must communicate clearly what AI can and can’t do—and show the business how results are generated. That’s where explainability becomes essential; users must understand how AI arrives at its conclusions to feel confident using it.
We’ve found the most successful AI projects are led by a trio of champions: an executive sponsor, the business process owner, and a technical lead. Together, they ensure alignment across strategy, outcomes, and execution.
Another approach for organizations just starting out is to select user groups that are both already pro-AI and vocal as initial targets for projects. This can reduce the risk in that first project and ensure that feedback comes quickly and cleanly before the next one.
Given the pressure from top executives to adopt AI quickly, how should tech leaders balance speed with building a sustainable, trustworthy data infrastructure that supports adoption, explainability, and long-term ROI?
The temptation to move fast often leads to wasted pilots and disappointing results. To balance speed with trustworthy data, resist the urge to chase short-term wins without a strong foundation. It’s not always sexy, but data reliability engineering (DRE) should be a core capability in the data organization. Sometimes it is just building a new pipeline that moves data from one location to another and cleans it at the same time. This kind of work has been going on for a long time for other reasons; it’s simply more urgent now, and having that expertise is important.
DRE provides strategies and processes for ensuring data quality, availability, observability, testing and root cause analysis of errors.
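As a rough illustration of what a DRE check might look like in practice (a hypothetical sketch, not any specific tool’s API, with made-up field names), the function below runs named completeness, uniqueness, and freshness checks whose pass/fail results can feed observability dashboards or alerts:

```python
import datetime as dt

def run_quality_checks(rows, now=None):
    """Run named data-quality checks over a batch of loaded rows.

    Returns a dict of check name -> bool, suitable for alerting or
    for recording as observability metrics. Field names ('id',
    'amount', 'loaded_at') are illustrative.
    """
    now = now or dt.datetime.now(dt.timezone.utc)
    ids = [r["id"] for r in rows]
    newest = max(r["loaded_at"] for r in rows)
    return {
        # completeness: no required values are missing
        "completeness": all(r.get("amount") is not None for r in rows),
        # uniqueness: no duplicate primary keys slipped through
        "uniqueness": len(ids) == len(set(ids)),
        # freshness: the pipeline delivered data within the last 24 hours
        "freshness": (now - newest) < dt.timedelta(hours=24),
    }
```

A failing check then becomes the starting point for root cause analysis: the check name tells the on-call engineer which class of error to investigate first.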
Data readiness ensures that you have the necessary ingredients to deliver an AI implementation. But AI success requires adoption—if employees don’t use the agentic tool to perform a task or customers don’t find an AI chatbot useful, the investment will be wasted. As an MIT study shows, it’s repeatable and scalable adoption, not one-off successes, that drives sustained ROI from AI.
How can companies create governance frameworks that both protect the business and accelerate AI innovation?
Strong governance is not a brake on innovation—it’s what allows AI to scale safely. A good governance framework defines clear rules for data use, protects personal data, and prevents unauthorized use of proprietary content or data.
At Hylaine, we’ve seen the most success when companies think about data governance for AI in broader terms. When the governance model sets the rules of the road for the company’s data practices as a whole, it keeps the data strategy on track through monitoring, auditing, tracking KPIs (including metrics for ROI), and reporting. We often recommend creating a governance council that includes representatives from different parts of the business, as well as IT experts. Once you’ve created a data governance framework, it’s important to note that AI governance is still a separate effort, even if it’s tightly connected with overall data governance.
To speed AI innovation, we advise implementing technical safeguards such as tokenizing real data, and automating alerts for PII exposure. Off-the-shelf tools like Perforce’s Delphix can support “continuous compliance” without slowing development. When governance is built into data operations, not bolted on later, organizations gain the freedom to innovate confidently.
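As a hedged sketch of the two safeguards mentioned above (hypothetical field names and key handling; a real deployment would pull the key from a secrets manager and likely use a dedicated tool such as Delphix), deterministic tokenization replaces raw values with tokens that still support joins, while a simple scan flags email-shaped strings that leak into records:

```python
import hashlib
import hmac
import re

# Hypothetical key for illustration only; in production, load this
# from a secrets manager and rotate it on a schedule.
SECRET = b"rotate-me"

def tokenize(value: str) -> str:
    """Deterministic token: the same input always yields the same token,
    so joins and aggregations still work, but the original value cannot
    be recovered without the key."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

# Naive email pattern; real PII scanners cover many more identifier types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pii_alerts(records):
    """Return indexes of records where raw email-shaped strings appear,
    i.e. rows that should trigger a PII-exposure alert."""
    return [i for i, r in enumerate(records)
            if any(EMAIL_RE.search(str(v)) for v in r.values())]
```

The deterministic property is the design choice worth noting: because `tokenize("a@b.com")` always produces the same token, analysts and AI pipelines can still link records across tables without ever seeing the underlying PII.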
Many organizations underestimate the human and organizational challenges of AI. What steps can leaders take to close the skills gap and sustain long-term AI success?
Technology alone doesn’t deliver ROI—people do. Many AI projects falter because the data teams responsible for implementation lack experience with modern cloud infrastructure, data engineering, and DevOps. To close that gap, companies can train or hire new talent (already in short supply) or contract outside experts. One effective way to train employees is to create hybrid teams that pair internal staff with external experts. Working as co-equals lets internal staff apply their deep business knowledge while the specialists accelerate the path to AI data readiness. Once teams learn the “new way” of working with data, they rarely want to revert to outdated methods.
When employees understand both the data and the reasoning behind AI outputs, adoption follows naturally.
Finally, leaders need to nurture a culture of trust and curiosity around AI. When employees understand how AI supports their work, and can see its outputs explained clearly, they’re more likely to adopt it, improve it, and drive the sustained ROI that most organizations are chasing.

The post Q&A with Ryan McElroy, Vice President of Technology of Hylaine: Solving the AI Data Readiness Problem appeared first on citybiz.