Real engagements. Real technology. Every metric on this page comes from an actual project delivered by the Cubed Analytics team. No made-up numbers.
A national retailer with 200+ stores had finance, operations, and commercial teams each maintaining their own Excel-based reporting. Month-end was a multi-day manual exercise. Data was inconsistent, definitions clashed, and leadership had no single version of the truth. We designed a governed semantic layer on top of their Azure data warehouse and built a Power BI platform with executive, operational, and self-service layers.
A financial services firm had 40+ data sources, each with its own hand-crafted ADF pipeline. Onboarding a new source took 2–3 weeks of engineering. We designed and built a metadata-driven ingestion framework on Azure — all pipeline configuration lives in a central metadata store. A single generic ADF pipeline reads configuration at runtime. Adding a new source is a matter of inserting rows, not writing code.
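To make "rows, not code" concrete, here's a minimal sketch of what onboarding a source can look like against a metadata store of this kind. The table and column names (ingest.SourceConfig, load_pattern, watermark_column and so on) are hypothetical stand-ins, not the client's actual schema.

```python
# Hypothetical schema: ingest.SourceConfig and its columns are
# illustrative stand-ins, not the client's actual metadata design.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<metadata-server>.database.windows.net;"
    "DATABASE=IngestionMetadata;UID=<user>;PWD=<password>"
)

# Onboarding a new source is one row of configuration. The single
# generic ADF pipeline reads this row at runtime to decide what to
# ingest, from where, and how.
cursor = conn.cursor()
cursor.execute(
    """
    INSERT INTO ingest.SourceConfig
        (source_name, source_type, connection_ref, object_name,
         load_pattern, watermark_column, target_path, is_enabled)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?)
    """,
    ("crm_contacts", "sql_server", "ls_crm", "dbo.Contacts",
     "incremental", "ModifiedDate", "raw/crm/contacts", 1),
)
conn.commit()
```

In a typical implementation of this pattern, the generic pipeline uses an ADF Lookup activity over this table to feed a ForEach loop, so the framework scales to new sources without any new pipeline code.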
A capital markets firm needed a platform processing both high-volume batch data from legacy systems and real-time event streams from Kafka and Azure Event Hubs. Their legacy Oracle warehouse couldn't support streaming. We built a medallion lakehouse on Azure: ADF handles batch ingestion via our metadata framework, while Spark Structured Streaming on Databricks consumes from Kafka and Event Hubs in near real time. Unity Catalog governs all data assets.
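As a rough illustration of the streaming half, the sketch below reads from Kafka with Spark Structured Streaming and lands raw events in a bronze Delta table. It assumes a Databricks notebook (where `spark` is predefined); broker, topic, and table names are placeholders. Event Hubs exposes a Kafka-compatible endpoint, so the same connector pattern covers both feeds.

```python
# Runs in a Databricks notebook, where `spark` is already in scope.
# Broker, topic, checkpoint, and table names are illustrative.
from pyspark.sql import functions as F

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<broker>:9092")
    .option("subscribe", "trades")
    .option("startingOffsets", "latest")
    .load()
)

# Bronze keeps events as-is; parsing and conformance happen in
# silver/gold, so a malformed payload never blocks ingestion.
bronze = raw.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

(
    bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/trades_bronze")
    .toTable("lakehouse.bronze.trades")  # governed by Unity Catalog
)
```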
An NHS Trust was embarking on a major digital transformation programme but had no clear picture of their data landscape: 12 source systems, no agreed standards, no data catalogue, and significant duplication. We conducted a full current-state review, ran stakeholder workshops, and designed a cloud-native target architecture on Azure. The resulting blueprint and prioritised roadmap are now used as the binding reference by all technology suppliers on the programme.
A precision manufacturer was running their entire data operation on an ageing on-premises SQL Server estate: 8 instances, 240+ databases, complex stored-procedure dependencies, and SSIS packages. We designed a migration strategy using Azure Database Migration Service, built automated validation and reconciliation scripts that compared source and target at every stage, and maintained parallel running for 6 weeks post-cutover.
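The validation scripts followed a simple pattern: run identical checks on both sides and flag any difference. Below is a stripped-down sketch of that idea using row counts and a cheap whole-table checksum; the connection details, table list, and checksum choice are illustrative, not the production scripts.

```python
# Hypothetical reconciliation sketch: run the same checks against
# source and target, and report anything that doesn't match.
import pyodbc

CHECKS = {
    "row_count": "SELECT COUNT_BIG(*) FROM {table}",
    # CHECKSUM_AGG over BINARY_CHECKSUM is a cheap (not cryptographic)
    # whole-table fingerprint: order-independent, good as a first pass.
    "checksum": "SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM {table}",
}

def run_check(conn_str: str, query: str):
    """Execute a single-value query and return the result."""
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(query).fetchval()

def reconcile(source_cs: str, target_cs: str, tables: list[str]) -> list[str]:
    """Return human-readable descriptions of every mismatch found."""
    mismatches = []
    for table in tables:
        for name, template in CHECKS.items():
            query = template.format(table=table)
            src = run_check(source_cs, query)
            tgt = run_check(target_cs, query)
            if src != tgt:
                mismatches.append(
                    f"{table}: {name} differs (source={src}, target={tgt})"
                )
    return mismatches
```

Running checks like these at every stage, and again throughout parallel running, is what lets a cutover be signed off on evidence rather than optimism.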
Following an acquisition, an e-commerce group needed to consolidate the acquired company's AWS-based platform (Redshift, S3, Glue, Lambda) into their existing Azure environment. The two platforms had fundamentally different architectural patterns, 14 Glue ETL jobs, and divergent data models. We mapped every AWS service to its Azure equivalent, rewrote the Glue jobs as parameterised ADF pipelines using our metadata framework, and ran both platforms in parallel for 8 weeks.
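The mapping exercise is easier to picture with an example. The sketch below shows the broad service equivalences this kind of migration typically works to, plus how a single Glue job might re-emerge as a row of metadata for the framework described above; the field names and job details are hypothetical.

```python
# Broad AWS-to-Azure equivalences used to frame a migration plan.
# The real engagement mapped every service individually; this is
# only the general shape.
AWS_TO_AZURE = {
    "Redshift": "Azure Synapse Analytics",
    "S3": "Azure Data Lake Storage Gen2",
    "Glue": "Azure Data Factory (Databricks for Spark-heavy jobs)",
    "Lambda": "Azure Functions",
}

# A rewritten Glue job becomes configuration, not code: one metadata
# row per feed, consumed by the same generic ADF pipeline as before.
# All names below are hypothetical.
glue_job_as_metadata = {
    "source_name": "orders_daily",      # formerly a standalone Glue job
    "source_type": "adls_parquet",
    "object_name": "landing/orders/",
    "load_pattern": "full",
    "target_path": "raw/orders",
    "is_enabled": True,
}
```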
A financial services group had invested in an Azure Databricks platform but their existing team — mostly SQL Server and SSIS specialists — lacked the PySpark, Databricks, and cloud-native skills to operate and extend it. We designed a bespoke 6-month programme grounded in their actual platform and real data problems. Each sprint, engineers applied new skills to real platform tasks, with our coaches reviewing work and pairing on complex problems.