KNEW — Knowledge Network of Every Witness
Proposal ID: 101306747 · Lisbon, Portugal
Post-submission update

KNEW v2 — Production-capable. Live. Scaling.

Since the EIC submission, KNEW has progressed from validated prototype to a production-capable global news intelligence platform. We redesigned ingestion and the AI pipeline, expanded coverage and languages, and hardened the system for enterprise and EU-wide deployment.

What changed since submission (executive)

KNEW is no longer a feasibility experiment — it is a working platform delivering value to early users. We moved the program from building a prototype to scaling reliability and reducing unit cost to enable broad adoption.

Product maturity

Full end-to-end engine: hybrid ingestion (RSS + crawlers + conditional APIs), 8-layer AI processing (including bias, sentiment, fact-checking, summarization, and categorization), geocoding and ownership mapping, and a live UI in production.
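As an illustration only (not KNEW's actual implementation), a layered processing engine like the one described can be sketched as an ordered list of enrichment functions, each reading and extending a shared article record; the layer names and logic here are placeholders:

```python
# Minimal sketch of a layered article-processing pipeline (illustrative,
# not the production code): each layer enriches the same article dict.

def summarize(article: dict) -> dict:
    # Placeholder summarizer: truncate the text.
    article["summary"] = article["text"][:60]
    return article

def categorize(article: dict) -> dict:
    # Placeholder categorizer: trivial keyword rule.
    article["category"] = "politics" if "election" in article["text"] else "general"
    return article

# A real deployment would register all eight layers here, in order.
LAYERS = [summarize, categorize]

def process(article: dict) -> dict:
    for layer in LAYERS:
        article = layer(article)
    return article
```

The benefit of this shape is that layers can be reordered, disabled, or swapped (e.g. a cheap local model vs. an API call) without touching the ingestion code.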

Architecture & reliability

Rebuilt ingestion to a fault-tolerant, async worker queue with Redis-backed caching & TTLs, automated retries, and observability (metrics, logs, SLI/SLOs) for production stability.
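A hedged sketch of the pattern described (async workers pulling from a queue, retries with backoff, and a TTL cache): this toy version keeps the cache in a local dict, where production would use Redis, and `fetch` stands in for the real network call:

```python
# Illustrative async worker queue with retries and a TTL cache.
# CACHE is an in-process stand-in for the Redis-backed cache (assumption).
import asyncio
import time

CACHE: dict[str, tuple[float, str]] = {}  # url -> (expires_at, payload)
TTL_SECONDS = 300

async def fetch(url: str) -> str:
    # Placeholder for the real HTTP fetch of a feed or page.
    return f"content of {url}"

async def fetch_with_retry(url: str, attempts: int = 3, backoff: float = 0.1) -> str:
    hit = CACHE.get(url)
    if hit and hit[0] > time.monotonic():
        return hit[1]  # cache hit within TTL: skip the fetch entirely
    for i in range(attempts):
        try:
            payload = await fetch(url)
            CACHE[url] = (time.monotonic() + TTL_SECONDS, payload)
            return payload
        except Exception:
            if i == attempts - 1:
                raise  # exhausted retries: surface the failure to observability
            await asyncio.sleep(backoff * 2 ** i)  # exponential backoff

async def worker(queue: asyncio.Queue, results: list) -> None:
    while not queue.empty():
        url = await queue.get()
        results.append(await fetch_with_retry(url))
        queue.task_done()

async def ingest(urls: list[str], n_workers: int = 4) -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    for u in urls:
        queue.put_nowait(u)
    results: list[str] = []
    await asyncio.gather(*(worker(queue, results) for _ in range(n_workers)))
    return results
```

Because workers are independent coroutines, a slow or failing source delays only its own task, which is the fault-tolerance property the rebuilt ingestion targets.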

Cost & scale

New hybrid strategy reduces per-article inference cost by an order of magnitude: RSS + local crawlers for coverage, conditional API calls for high-value sources, and caching to avoid repeated inference.

Traction & validation

Early users

Roughly 100 active testers use the platform daily or weekly; qualitative feedback indicates improved clarity and relevance compared with generic aggregators.

Willingness to pay

Power users (journalists, analysts) show strong purchase intent for a professional subscription at roughly €9/month.

Evidence

Proposal PDFs (Part B sections) are attached in the archive for audit; download the archive or request the full documents at the interview.

How EIC funding will be used

Scale ingestion

Engineering hires for crawler development and operations, regional collectors in 25+ countries, and scalable collector infrastructure (3–6 months).

AI & cost-efficiency

Self-hosted inference pilots, model distillation, caching & batching to reduce inference cost and improve latency.
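Of the levers listed, batching is the simplest to illustrate: grouping many articles into one model call amortizes per-call overhead. This is a generic sketch under the assumption of a batch-capable model interface, not KNEW's actual inference code:

```python
# Illustrative batched inference: group texts so each model call
# handles `batch_size` articles instead of one.

def batched(items: list, size: int):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def run_model_batch(texts: list[str]) -> list[int]:
    # Stand-in for one batched model call; returns a dummy score per text.
    return [len(t) for t in texts]

def infer_all(texts: list[str], batch_size: int = 32) -> list[int]:
    results: list[int] = []
    for batch in batched(texts, batch_size):
        results.extend(run_model_batch(batch))
    return results
```

With per-call overhead (network round trip, model warm-up) spread over a whole batch, throughput rises and cost per article falls, which also helps the latency target.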

Enterprise & compliance

GDPR, auditability, SLAs, and enterprise API work to onboard newsrooms, institutions, and research partners.

12-month roadmap (high level)

0–3 months

Crawler expansion pilots, finalize self-hosted inference PoC, observability & SRE hardening.

3–9 months

Scale to 25+ countries, pilot enterprise integrations, tighten compliance processes.

9–12 months

Launch paid tiers, onboard first enterprise customers, begin EU-wide deployments.

Key metrics & targets

Current

~100 early users · Live ingestion & AI pipeline · Daily test traffic.

Targets

Latency < 2s for priority feeds · Cost per 1k articles ↓ 70% · 10k DAU in 12 months.

Impact

Reduced misinformation exposure, faster situational awareness for journalists & institutions, improved media transparency across regions.
