Mission Context

As part of a European research project, I contributed to a two-month research mission conducted at the University of Tokyo, focusing on road surface friction estimation using only a forward-facing camera.

This work led to the publication of a paper at IEEE/ASME AIM 2023 (Ueno et al.), in collaboration with the University of Tokyo and UTC. The system combines image segmentation, confidence modeling, geometric projection, and accumulation into a global surface grid map.

Video 1 - Video demo: road friction estimation.


The experiment was conducted with the Fujimoto Lab (University of Tokyo) as part of the OWheel collaboration. The laboratory specializes in autonomous vehicles and control systems, particularly for electric vehicles and in-wheel motor platforms.

Fig. 1 - Laboratories of the University of Tokyo, Kashiwa campus.

The mission aimed to detect road surface types in real time from a front-facing camera and provide this information to a traction controller.

Image Processing Pipeline

Steps

  1. ROI selection
  2. Color distance computation
  3. Binary thresholding
  4. Image downsampling
  5. Confidence mask application
Fig. 2 - Five-step processing pipeline from RGB image to pixel-level friction prediction.
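The five steps above can be sketched as follows. This is a minimal illustration, not the on-vehicle implementation: the reference color, threshold, and block size are placeholder assumptions, and the classifier here only separates slippery candidates from the rest.

```python
import numpy as np

def friction_pipeline(rgb, roi, target_color=(40, 80, 200),
                      thresh=60.0, block=8, trust=None):
    """Sketch of the 5-step pipeline: ROI -> color distance ->
    threshold -> downsample -> confidence mask."""
    # 1. ROI selection (row/column slice of the image)
    r0, r1, c0, c1 = roi
    crop = rgb[r0:r1, c0:c1].astype(np.float32)

    # 2. Euclidean distance to the reference (slippery-surface) color
    dist = np.linalg.norm(crop - np.array(target_color, np.float32), axis=-1)

    # 3. Binary thresholding: 1 = slippery candidate, 0 = road
    mask = (dist < thresh).astype(np.float32)

    # 4. Downsampling by block averaging (reduces later projection cost)
    h, w = mask.shape
    h2, w2 = h // block, w // block
    small = mask[:h2 * block, :w2 * block] \
        .reshape(h2, block, w2, block).mean(axis=(1, 3))

    # 5. Confidence (trust) mask: weight each cell by a value in [0, 1]
    if trust is None:
        trust = np.ones_like(small)
    return small * trust
```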

The image classifier assigns each pixel one of three classes:

  • ROAD (normal traction)
  • SLIPPERY (blue polymer)
  • UNKNOWN
Fig. 3 - Left: original image. Middle: binary mask. Right: confidence-weighted prediction.

Projection and Grid Map Generation

Camera to World Projection

Each pixel $(u, v)$ is projected into 3D space using:

$$ x_c = \frac{(u - c_x)}{f_x} \cdot z(v) \quad \text{Eq (1)} $$
$$ y_c = \frac{(v - c_y)}{f_y} \cdot z(v) \quad \text{Eq (2)} $$

where $z(v)$ is the estimated distance from camera to ground, obtained via linear regression.
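Eqs. (1)-(2) can be written directly as a back-projection helper. The intrinsics and the linear regression coefficients `a`, `b` (so that $z(v) = a\,v + b$) are illustrative assumptions:

```python
def backproject(u, v, fx, fy, cx, cy, a, b):
    """Back-project pixel (u, v) to the camera frame via Eqs. (1)-(2).
    z(v) = a*v + b is the camera-to-ground distance for image row v,
    obtained from a linear regression (coefficients a, b assumed)."""
    z = a * v + b              # estimated distance for this image row
    x_c = (u - cx) / fx * z    # Eq (1)
    y_c = (v - cy) / fy * z    # Eq (2)
    return x_c, y_c, z
```

A pixel at the principal point $(c_x, c_y)$ maps to $x_c = y_c = 0$, which is a quick sanity check of the calibration.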

Pose correction is performed by fusing GPS and IMU data with a Kalman filter, so that pixel coordinates can be converted into world coordinates and inserted into a 2D grid.
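Assuming the fused pose is a 2D position plus heading, placing a ground point from the vehicle frame into the world frame is a rotation followed by a translation; the variable names below are illustrative:

```python
import math

def to_world(x_v, y_v, pose):
    """Transform a ground point from the vehicle frame into the
    world frame using the fused (GPS + IMU + Kalman) pose.
    pose = (px, py, yaw): world position and heading in radians."""
    px, py, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    x_w = px + c * x_v - s * y_v   # 2D rotation + translation
    y_w = py + s * x_v + c * y_v
    return x_w, y_w
```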

Fig. 4 - Pipeline: projecting detection to grid map using calibrated camera + GPS pose.

Accumulation and Trust Masking

Each grid cell is updated over multiple frames to improve stability. Confidence is reduced for distant pixels due to higher projection error and reduced color reliability.
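One simple way to realize this distance-dependent confidence is a trust weight that decays with the estimated range; the linear decay and the cutoff distance `z_max` here are assumptions, not the calibrated trust model:

```python
import numpy as np

def trust_weights(z, z_max=10.0):
    """Confidence in [0, 1] that decays linearly with the estimated
    camera-to-ground distance z (metres); zero beyond z_max (assumed)."""
    return np.clip(1.0 - np.asarray(z, float) / z_max, 0.0, 1.0)
```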

Fig. 5 - Trust mask improves reliability in central image zones.

Grid Output and Road Profile

The global grid stores accumulated surface classifications over time, with two buffers:

  • $B_{ij}$: slippery surface class (blue-sheet)
  • $R_{ij}$: normal road class

Update rule:

$$ \begin{cases} B_{ij} \leftarrow B_{ij} + \hat{p}_{uv}, & \text{if } \hat{p}_{uv} > 0 \\ R_{ij} \leftarrow R_{ij} + \hat{p}_{uv}, & \text{if } \hat{p}_{uv} < 0 \end{cases} \quad \text{Eq (3)} $$

Final grid value:

$$ G_{ij} = \begin{cases} \frac{B_{ij} + R_{ij}}{|B_{ij}| + |R_{ij}|}, & \text{if } |B_{ij}| + |R_{ij}| \neq 0 \\ 0, & \text{else} \end{cases} \quad \text{Eq (4)} $$
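Eqs. (3)-(4) can be sketched as a small accumulator class; the per-cell update interface is an illustrative choice:

```python
import numpy as np

class FrictionGrid:
    """Accumulates per-cell evidence into slippery (B) and road (R)
    buffers, following Eqs. (3)-(4)."""
    def __init__(self, h, w):
        self.B = np.zeros((h, w))   # slippery (blue-sheet) evidence, >= 0
        self.R = np.zeros((h, w))   # road evidence, <= 0

    def update(self, i, j, p):
        # Eq (3): the signed prediction p goes to the matching buffer
        if p > 0:
            self.B[i, j] += p
        elif p < 0:
            self.R[i, j] += p

    def value(self, i, j):
        # Eq (4): normalized cell value in [-1, 1], 0 when no evidence
        denom = abs(self.B[i, j]) + abs(self.R[i, j])
        return (self.B[i, j] + self.R[i, j]) / denom if denom != 0 else 0.0
```

A cell that has only seen slippery evidence converges to $+1$, only road evidence to $-1$, and conflicting evidence moves it toward $0$.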

This grid is used to build friction profiles for left and right wheels.

Fig. 6 - Example of final friction profile used by controller.

Runtime Optimization

Image resolution reduction was crucial. Projection runtime decreased by over $10\times$ when images were resized before processing.

Fig. 7 - With downsampling: efficient per-frame processing time.

Experimental Setup

  • EV testbed with onboard RGB camera and GPS
  • Blue polymer simulating slippery surfaces
  • Kalman filtering to improve GPS positioning
  • Real-time fusion of camera and pose data
Fig. 8 - Custom tool for GPS + Camera synchronized dataset creation.
Fig. 9 - Left: raw GPS. Right: with Kalman filtering.

Evaluation

Case 1: Double Bluesheet Lane

Fig. 10 - Vehicle on symmetrical slippery zones.
Fig. 11 - Profile generated from friction grid.
Fig. 12 - Distance error between predicted and true profile.

Case 2: Asymmetric Surface

Fig. 13 - Vehicle with offset slippery zone.
Fig. 14 - Estimated friction profile (asymmetric).
Fig. 15 - Prediction error across path length.

Slip Ratio Results

For more details on the control part, refer to the article (Ueno et al., IEEE/ASME AIM 2023).

Slip ratio is defined as:

$$ s = \frac{R\omega - u}{R\omega} \quad \text{Eq (5)} $$

Where:

  • $R\omega$: wheel peripheral speed ($R$: wheel radius, $\omega$: wheel angular velocity)
  • $u$: vehicle forward velocity
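A minimal helper implementing Eq. (5); the standstill guard `eps` is an added assumption to avoid dividing by zero:

```python
def slip_ratio(wheel_speed, vehicle_speed, eps=1e-6):
    """Slip ratio per Eq. (5) during traction.
    wheel_speed: wheel peripheral speed (m/s); vehicle_speed: u (m/s)."""
    if abs(wheel_speed) < eps:
        return 0.0   # standstill guard (assumption, not from the paper)
    return (wheel_speed - vehicle_speed) / wheel_speed
```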

Visual-based friction prediction reduced the slip ratio by 50% compared to wheel-sensor-only estimation.

Fig. 16 - Slip ratio: visual-based control (red) vs. wheel-sensor-only (blue).

Conclusion

  • Camera-based approach enables accurate real-time road surface detection
  • Friction grid provides stable and reliable predictions
  • System demonstrated real-time compatibility
  • Traction controller performance improved with reduced slip

This confirms vision as a feasible solution for low-cost and anticipatory traction control in autonomous and semi-autonomous vehicles.

References

  • Proposal of On-board Camera-Based Driving Force Control Method for Autonomous Electric Vehicles
    Takumi Ueno, Hugo Pousseur, Binh Minh Nguyen, Alessandro Victorino, Hiroshi Fujimoto
    IEEE/ASME AIM 2023
    [Access PDF]