Unified Schema
6,000+ sources normalized into a single object model. Cross-reference Transport, Land, and Corporate data without complex joins.
Live Synchronization
Event-driven ingestion pipelines. We mirror the official government publication rate—from millisecond sensor feeds to daily statutory registers.
Sovereign Governance
Data never leaves your perimeter. Zero-copy access via Snowflake Native Apps ensures full UK data residency and SOC 2 Type II compliance.
Our customers
Trusted by engineers, researchers, and analysts at
OpenAI
Anthropic
Google
MIT
Stanford
Jane Street
Citadel
HRT
Search across London's datasets
Query real-time city data to power your internal risk and valuation models.
Use London Data as the single query layer for transport, land, corporate, and environmental data—without stitching together dozens of CSVs and legacy APIs.
Query
Agent: internal_risk_engine
SELECT uprn, insolvency_risk, flood_zone, planning_status FROM london_data.portfolio_view WHERE uprn IN (...)
UPRN          Insolvency  Flood   Planning
100021065123  Low         Zone 1  Approved
100021065124  Elevated    Zone 3  Refused
Agent: sync complete.
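The query above uses an `IN (...)` list over UPRNs. As a minimal sketch, assuming the `%s` paramstyle used by the Snowflake Python connector, here is one way a client could build that clause safely (the helper name is ours, not part of the product):

```python
# Hypothetical helper: build a parameterized version of the portfolio
# query shown above. Table and column names come from the example;
# the %s placeholder style is an assumption (Snowflake connector default).

def build_portfolio_query(uprns):
    """Return a parameterized SQL string plus its bound values."""
    if not uprns:
        raise ValueError("at least one UPRN is required")
    placeholders = ", ".join(["%s"] * len(uprns))
    sql = (
        "SELECT uprn, insolvency_risk, flood_zone, planning_status "
        "FROM london_data.portfolio_view "
        f"WHERE uprn IN ({placeholders})"
    )
    return sql, list(uprns)

sql, params = build_portfolio_query(["100021065123", "100021065124"])
```

Binding values rather than interpolating them keeps the query plan cacheable and avoids injection when UPRNs come from user input.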
Analyze legal and planning text
Let Alliela AI read filings, contracts, and planning decisions for you.
Instead of skimming 200-page documents, ask natural-language questions and receive traced, citation-backed answers against the underlying source text.
Question
Agent: alliela_ai
What new covenants were added to this 2024 planning decision in Tower Hamlets?
Item          Summary
Condition 7   24/7 flood evacuation plan for basement units.
Condition 12  Noise monitoring for construction traffic on the A13.
Agent: analysis complete.
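Citation-backed answers are only useful if each finding can be traced to its source passage. The response shape below is an illustrative assumption, not the documented Alliela AI schema; it sketches how an answer might carry its citations alongside each extracted condition:

```python
# Illustrative only: this answer structure is an assumption, not the
# documented Alliela AI response format. The citation locators (page,
# paragraph) are hypothetical.

answer = {
    "question": "What new covenants were added to this 2024 planning "
                "decision in Tower Hamlets?",
    "items": [
        {"item": "Condition 7",
         "summary": "24/7 flood evacuation plan for basement units.",
         "citation": {"page": 14, "para": 3}},   # hypothetical locator
        {"item": "Condition 12",
         "summary": "Noise monitoring for construction traffic on the A13.",
         "citation": {"page": 22, "para": 1}},   # hypothetical locator
    ],
}

def conditions(ans):
    """List the condition identifiers cited in an answer."""
    return [entry["item"] for entry in ans["items"]]
```

Keeping a locator on every item is what lets a reviewer jump from the summary back to the underlying source text instead of trusting the model's paraphrase.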
Monitor portfolios continuously
Surveillance watches your portfolio for legal and environmental shocks.
Connect your loan book or asset sheet and receive alerts only when something material happens to a company, director, or property you care about.
Alert rules
Engine: surveillance
IF company IN portfolio AND event_type IN ('winding_up_petition','environment_agency_notice') THEN notify('risk_ops')
Time   Entity                     Event
09:14  Canary Wharf Holdings plc  New planning objection logged
09:42  East Dock Logistics Ltd    Winding-up petition filed
Engine: 2 new alerts delivered to risk_ops.
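The IF/THEN rule above reads directly as a predicate. A minimal sketch in Python, assuming a simple event dict shape that is ours, not a documented schema:

```python
# Sketch of the surveillance rule above as a predicate. The event dict
# shape and the portfolio set are assumptions for illustration.

WATCHED_EVENTS = {"winding_up_petition", "environment_agency_notice"}

def should_notify(event, portfolio):
    """True when a watched event type touches a portfolio company."""
    return (event["company"] in portfolio
            and event["event_type"] in WATCHED_EVENTS)

portfolio = {"East Dock Logistics Ltd", "Canary Wharf Holdings plc"}
event = {"company": "East Dock Logistics Ltd",
         "event_type": "winding_up_petition"}
# should_notify(event, portfolio) -> True: deliver to risk_ops
```

Filtering on both the entity and the event type is what keeps the alert stream down to material changes rather than every published record.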
Clean and effortless APIs
Build your first request in seconds. Leave the data wrangling to us.
GET /api/london-data/portfolio_view
Run
import requests

response = requests.get(
    "https://api.opendata.london/portfolio_view",
    params={"uprn": "100021065124"},
)
data = response.json()
RESPONSE
200 OK
JSON
{
  "uprn": "100021065124",
  "address": "East Dock Estate, E14 5AB",
  "planning_status": "Refused",
  "flood_zone": "3",
  "company_name": "East Dock Logistics Ltd",
  "insolvency_risk_score": 0.82,
  "events": [
    "Planning refusal 2025-11-04",
    "Winding-up petition filed 2026-01-12"
  ]
}
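Once parsed, the payload plugs straight into downstream logic. A minimal sketch using the sample response fields; the 0.7 threshold and the `needs_review` helper are illustrative choices, not documented defaults:

```python
# Working with the sample API payload: flag assets for manual review.
# The 0.7 risk threshold is an illustrative assumption.

data = {
    "uprn": "100021065124",
    "planning_status": "Refused",
    "flood_zone": "3",
    "insolvency_risk_score": 0.82,
}

def needs_review(asset, threshold=0.7):
    """Flag an asset on high insolvency risk or a Flood Zone 3 location."""
    return (asset["insolvency_risk_score"] >= threshold
            or asset["flood_zone"] == "3")
```

For this sample record, both conditions trip, so it would surface at the top of a review queue.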
FAQ
Is OpenData.London a data provider or an analytics platform?
It is both. We provide the infrastructure (London Data) for data engineering teams who need raw, zero-copy access to 6,000+ standardized datasets via Snowflake. Simultaneously, we provide the intelligence (Alliela AI) for business analysts who need to query that data using natural language, without writing SQL.
Is this data scraped? How do you handle provenance?
We do not scrape. We act as an authorized aggregator for public sector endpoints. Every row of data in London Data is cryptographically hashed with a lineage stamp linking it back to the original government source file (e.g., HM Land Registry, Companies House). You get a fully audit-ready chain of custody.
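To make the idea of a lineage stamp concrete: one common construction hashes the row's canonical form together with its source file identifier. The exact scheme London Data uses is not published here; the sketch below is an assumption for illustration only:

```python
# Assumed construction, not the documented London Data scheme: a lineage
# stamp as SHA-256 over the source file id plus the row's canonical JSON.
import hashlib
import json

def lineage_stamp(row, source_file):
    """Deterministic hash binding a row to its originating source file."""
    canonical = json.dumps(row, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{source_file}|{canonical}".encode()).hexdigest()

stamp = lineage_stamp(
    {"uprn": "100021065124", "planning_status": "Refused"},
    "hm-land-registry/2026-01-12.csv",   # hypothetical source file id
)
```

Because the canonicalization sorts keys, the same row always produces the same stamp, which is what makes the chain of custody independently re-verifiable.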
What is the latency between the source and your database?
It depends on the asset class. High-velocity feeds like TfL Transport and Grid Telemetry are streamed in near real-time (seconds). Statutory registers like Planning Applications and Corporate Insolvency are reconciled daily. We map the update frequency to the source authority's maximum publish rate.
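One way to picture that tiering is as a cadence table keyed by asset class. The feed names mirror the examples above; the specific intervals are illustrative assumptions, not published SLAs:

```python
# Illustrative cadence map matching the tiers described above.
# The concrete timedelta values are assumptions, not published SLAs.
from datetime import timedelta

SYNC_CADENCE = {
    "tfl_transport": timedelta(seconds=5),        # streamed, near real-time
    "grid_telemetry": timedelta(seconds=5),       # streamed, near real-time
    "planning_applications": timedelta(days=1),   # reconciled daily
    "corporate_insolvency": timedelta(days=1),    # reconciled daily
}
```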
Does our proprietary data ever leave our environment?
Never. Because we operate as a Snowflake Native App, our code comes to your data. If you use Alliela AI to cross-reference your private loan book against our public data, that compute happens entirely inside your governance perimeter. We see zero PII and zero client data.
What if I need a niche dataset that isn't listed?
We operate an "On-Demand Ingestion" pipeline. If a public dataset exists in London that we don't currently host, enterprise clients can request ingestion. Our engineering team will build the connector, normalize the schema, and add it to the London Data master graph within 48-72 hours.