Is OpenData.London a data provider or an analytics platform?
It is both. We provide the infrastructure (London Data) for data engineering teams who need raw, zero-copy access to 6,000+ standardized datasets via Snowflake. Simultaneously, we provide the intelligence (Alliela AI) for business analysts who need to query that data using natural language, without writing SQL.
Is this data scraped? How do you handle provenance?
We do not scrape. We act as an authorized aggregator for public sector endpoints. Every row of data in London Data is cryptographically hashed with a lineage stamp linking it back to the original government source file (e.g., HM Land Registry, Companies House). You get a fully audit-ready chain of custody.
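As a sketch of what a row-level lineage stamp can look like: the field names, file names, and hashing scheme below are illustrative assumptions, not OpenData.London's actual implementation. The idea is to hash each row's canonical form together with the identity and digest of the original source file, so any party holding the source digest can recompute the stamp and verify the chain of custody.

```python
import hashlib
import json

def lineage_stamp(row: dict, source_file: str, source_sha256: str) -> str:
    """Illustrative only: bind a row to its original source file by
    hashing the row's canonical JSON with the file's name and digest."""
    canonical = json.dumps(row, sort_keys=True, separators=(",", ":"))
    payload = f"{source_file}|{source_sha256}|{canonical}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical HM Land Registry row and source-file digest.
row = {"title_number": "NGL123456", "price_paid": 450000}
source_digest = "ab12" * 16  # placeholder SHA-256 of the source CSV

stamp = lineage_stamp(row, "pp-2024.csv", source_digest)

# An auditor with the same inputs recomputes the identical stamp;
# any change to the row or source file produces a different one.
assert stamp == lineage_stamp(row, "pp-2024.csv", source_digest)
```

Because the stamp depends on both the row content and the source file's digest, it detects tampering at either level without storing any extra copy of the data.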
What is the latency between the source and your database?
It depends on the asset class. High-velocity feeds such as TfL Transport and Grid Telemetry are streamed in near real time (seconds). Statutory registers such as Planning Applications and Corporate Insolvency are reconciled daily. In every case, we match a dataset's refresh frequency to the maximum rate at which the source authority publishes.
Does our proprietary data ever leave our environment?
Never. Because we operate as a Snowflake Native App, our code comes to your data. If you use Alliela AI to cross-reference your private loan book against our public data, that compute happens entirely inside your governance perimeter. We see zero PII and zero client data.
What if I need a niche dataset that isn't listed?
We operate an "On-Demand Ingestion" pipeline. If a public dataset exists in London that we don't currently host, enterprise clients can request ingestion. Our engineering team builds the connector, normalizes the schema, and adds the dataset to the London Data master graph within 48-72 hours.