Benchmarks Inform Research, Pilots Prove Value
FieldSpace validates perception outputs and candidate plans against a physics-based world model, then prunes trajectories that are physically infeasible. In pilots we measure false positive reduction, trajectory rejection rate for unsafe motion, decision latency, GPU hours per mile or per task, and task completion under degraded comms.
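A minimal sketch of how this layer sits beside an existing stack. `FieldSpaceClient`, `Detection`, and the 60 m/s speed bound are illustrative assumptions, not FieldSpace's actual API or defaults:

```python
# Minimal sketch of a physics-validation layer beside an existing stack.
# FieldSpaceClient, Detection, and the speed bound are illustrative
# assumptions, not FieldSpace's actual API or defaults.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float
    y: float
    vx: float  # m/s
    vy: float  # m/s

class FieldSpaceClient:
    def validate(self, detections, max_speed=60.0):
        """Keep only detections whose implied motion is physically plausible."""
        return [d for d in detections
                if (d.vx**2 + d.vy**2) ** 0.5 <= max_speed]

    def prune(self, candidates, is_feasible):
        """Drop candidate trajectories that violate physical constraints."""
        return [t for t in candidates if is_feasible(t)]

def control_step(client, detections, candidates, is_feasible):
    """One loop iteration: validated detections and feasible plans go back
    to your existing planner/controller; nothing else in the stack changes."""
    return client.validate(detections), client.prune(candidates, is_feasible)
```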
What We Measure in Pilots
These KPIs reflect our role as a physics layer beside your perception and control stack.
False Positive Reduction
How many perception errors does FieldSpace catch? Measured as the percentage reduction in unsafe detections that would otherwise have triggered bad plans (computation sketched below).
Typical pilot results:
35-50%
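As a sketch, the metric is the share of unsafe detections caught during log replay; the function and argument names are illustrative:

```python
def false_positive_reduction(baseline_fp: int, remaining_fp: int) -> float:
    """Percent of unsafe detections caught by FieldSpace in log replay.

    baseline_fp:  unsafe false detections produced by the stack alone
    remaining_fp: those that still pass FieldSpace validation
    """
    if baseline_fp == 0:
        return 0.0
    return 100.0 * (baseline_fp - remaining_fp) / baseline_fp

# e.g. 120 unsafe detections at baseline, 66 remaining -> 45.0% reduction
assert false_positive_reduction(120, 66) == 45.0
```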
Decision Latency
Added latency from perception output to constrained trajectory, critical for control-loop timing. Measured in ms at p50, p95, and p99 (measurement sketch below).
Typical pilot results:
<5ms
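A measurement harness sketch, assuming a `validate` callable like the one above; the percentile helper uses the nearest-rank method:

```python
import time

def percentile(samples, p):
    """Nearest-rank percentile; sufficient for pilot KPI reporting."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, int(round(p / 100.0 * len(s))) - 1))
    return s[k]

def added_latency_ms(validate, frames):
    """Time only the validation call: perception output in, constrained result out."""
    latencies = []
    for detections in frames:
        t0 = time.perf_counter()
        validate(detections)
        latencies.append((time.perf_counter() - t0) * 1e3)
    return {f"p{p}": percentile(latencies, p) for p in (50, 95, 99)}

# Stand-in validator and synthetic frames, just to exercise the harness:
print(added_latency_ms(lambda dets: dets, [[0.0] * 100] * 1000))
```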
Compute Usage
GPU hours per mile or per task. FieldSpace prunes infeasible futures early, reducing the need for heavy vision processing.
Typical pilot results:
20-30%
savings
Trajectory Rejection Rate
How often does FieldSpace reject unsafe motion before it reaches your controller? A higher rate means a tighter safety envelope (feasibility sketch below).
Typical pilot results:
15-25%
of candidate plans
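A sketch of the kind of feasibility test behind this number, using illustrative acceleration and curvature limits rather than FieldSpace's actual constraint set:

```python
import math

def is_feasible(points, dt, a_max=3.0, kappa_max=0.2):
    """Reject a trajectory if any segment exceeds acceleration or curvature limits.

    points: [(x, y), ...] sampled every dt seconds. a_max (m/s^2) and
    kappa_max (1/m) are illustrative limits, not FieldSpace defaults.
    """
    v = [((x2 - x1) / dt, (y2 - y1) / dt)
         for (x1, y1), (x2, y2) in zip(points, points[1:])]
    for (vx, vy), (vx2, vy2) in zip(v, v[1:]):
        ax, ay = (vx2 - vx) / dt, (vy2 - vy) / dt
        if math.hypot(ax, ay) > a_max:
            return False
        speed = math.hypot(vx, vy)
        if speed > 1e-6:
            kappa = abs(vx * ay - vy * ax) / speed**3  # |v x a| / |v|^3
            if kappa > kappa_max:
                return False
    return True

def rejection_rate_pct(candidates, dt=0.1):
    """Share of candidate plans rejected before reaching the controller."""
    rejected = sum(not is_feasible(p, dt) for p in candidates)
    return 100.0 * rejected / max(1, len(candidates))
```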
Degraded Comms Resilience
Task completion rate when network or perception bandwidth is reduced. FieldSpace keeps operating locally with short-horizon prediction (fallback sketch below).
Typical pilot results:
95%+
safe operation maintained
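A sketch of the fallback pattern. Constant-velocity extrapolation and the staleness threshold are illustrative stand-ins; FieldSpace's actual short-horizon model is not reproduced here:

```python
def short_horizon_fallback(last_tracks, horizon_s=1.0, dt=0.1):
    """Extrapolate the last validated tracks when fresh perception is unavailable.

    last_tracks: [(x, y, vx, vy), ...] from the most recent validated frame.
    Constant-velocity prediction is an illustrative stand-in for FieldSpace's
    actual short-horizon model.
    """
    steps = int(horizon_s / dt)
    return [[(x + vx * k * dt, y + vy * k * dt) for k in range(1, steps + 1)]
            for (x, y, vx, vy) in last_tracks]

def tracks_for_planning(fresh_frame, last_tracks, age_s, max_staleness_s=0.3):
    """Use fresh perception when it is recent enough; otherwise predict locally."""
    if fresh_frame is not None and age_s <= max_staleness_s:
        return fresh_frame
    return short_horizon_fallback(last_tracks)
```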
Integration Time
Days from ROS/gRPC handshake to first on-vehicle or robot trial; faster integration means faster time to value (bridge-node sketch below).
Typical pilot results:
5-10
days
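For ROS 2 teams, integration can look like a thin bridge node. The topic names, the `String` stand-in message type, and the pass-through "validation" below are illustrative assumptions; a real integration uses your stack's typed messages and the FieldSpace call:

```python
# Thin ROS 2 bridge node: subscribe to perception, publish validated output.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String  # stand-in; a real stack uses typed messages

class FieldSpaceBridge(Node):
    def __init__(self):
        super().__init__('fieldspace_bridge')
        self.pub = self.create_publisher(String, '/fieldspace/validated', 10)
        self.create_subscription(
            String, '/perception/detections', self.on_detections, 10)

    def on_detections(self, msg):
        validated = msg  # placeholder for the FieldSpace validation call
        self.pub.publish(validated)

def main():
    rclpy.init()
    rclpy.spin(FieldSpaceBridge())

if __name__ == '__main__':
    main()
```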
How We Validate
Our pilot program measures FieldSpace's impact at each stage, from log replay to on-vehicle trials.
Log Replay
Run FieldSpace on your recorded data for baseline KPI measurement without on-vehicle risk (replay sketch below).
Week 1 of pilot
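A replay harness sketch, assuming perception frames exported as one JSON object per line; pilots replay your native log format, so this schema is purely illustrative:

```python
import json

def replay(log_path, validate):
    """Feed recorded perception frames through the validator, offline.

    Assumes one JSON object per line with a "detections" list; the schema
    is illustrative, not a required export format.
    """
    frames = total = kept = 0
    with open(log_path) as f:
        for line in f:
            detections = json.loads(line)["detections"]
            frames += 1
            total += len(detections)
            kept += len(validate(detections))
    return {
        "frames": frames,
        "detections": total,
        "flagged": total - kept,
        "flag_rate_pct": 100.0 * (total - kept) / max(1, total),
    }
```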
Simulation
Optional: validate in your sim environment with edge case scenarios.
Week 1-2 (optional)
Closed Course
On-vehicle or robot trials with FieldSpace running in parallel with your stack; A/B testing with a safety driver.
Week 2-3 of pilot
KPI Report
Comprehensive report on all KPIs, including degraded-comms scenarios, to support a go/no-go decision.
Week 3 final deliverable
Research & Development Benchmarks
These academic benchmarks inform our R&D, but pilot KPIs prove value in your environment.
Internal Performance Metrics
Traffic Field Computation (PDE Solver)
Perception Validation
Note: These are internal R&D benchmarks on reference hardware. Your pilot will measure real-world KPIs on your platform and data.
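For context on the Traffic Field Computation (PDE Solver) benchmark, this is the general class of computation: one explicit finite-difference step of a 2-D advection-diffusion field. The equation, discretization, and coefficients are illustrative assumptions, not FieldSpace's actual traffic-field model:

```python
import numpy as np

def step_field(rho, vx, vy, D, dx, dt):
    """One explicit step of d(rho)/dt = -v . grad(rho) + D * laplacian(rho).

    rho: 2-D field on a square grid; upwind advection, central-difference
    Laplacian, periodic boundaries. Illustrative only: FieldSpace's actual
    traffic-field PDE is not reproduced here.
    """
    dpx = ((rho - np.roll(rho, 1, axis=1)) if vx >= 0
           else (np.roll(rho, -1, axis=1) - rho)) / dx
    dpy = ((rho - np.roll(rho, 1, axis=0)) if vy >= 0
           else (np.roll(rho, -1, axis=0) - rho)) / dx
    lap = (np.roll(rho, 1, 0) + np.roll(rho, -1, 0) +
           np.roll(rho, 1, 1) + np.roll(rho, -1, 1) - 4 * rho) / dx**2
    return rho + dt * (D * lap - (vx * dpx + vy * dpy))

# e.g. a point source advected and diffused for 100 steps:
rho = np.zeros((64, 64)); rho[32, 32] = 1.0
for _ in range(100):
    rho = step_field(rho, vx=1.0, vy=0.0, D=0.1, dx=0.5, dt=0.01)
```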
Validate FieldSpace on Your Data
Start a 4-week pilot. Week 1: log replay and baseline KPIs. Week 2-3: on-vehicle trials and A/B testing. Week 3: final report and go/no-go decision.
KPI-driven validation. Integrates with ROS 1/2 and gRPC. Built for autonomy teams.