Combining drones and Pl@ntNet: the winning bet of OFVi and CoForFunc to better understand tropical forest canopies in Central Africa

From February 2nd to 4th, the Pl@ntNet team attended the annual meeting of the CoForFunc project to present its activities around a new development: the identification of tropical forest trees from images captured by drones!

This new feature, implemented for the first time as part of the XPrize competition (which we had mentioned here!), aimed to explore the potential of using drone-based canopy photographs to identify tropical trees in Brazilian forests. The results obtained at the time were already very encouraging.

This work is now continuing within two projects (OFVi and CoForFunc) focused on the forests of the Congo Basin, a global biodiversity hotspot that is highly threatened by climate change and anthropogenic pressures. These two projects are expected to make this new feature operational and to apply it to research questions such as the structure and composition of these forests and their phenological variations over time.

This work builds on the large number of images produced by UMR AMAP over several years in these forests. For its training, the Pl@ntNet model thus benefited from more than 10,000 annotated photographs of tree crowns, covering over 100 species from southern Cameroon. This dataset, rare in its quality, completeness, and drone-based format, enabled the Pl@ntNet model to achieve remarkable performance in species identification directly from the canopy.

Today, this technology paves the way for rapid, large-scale, and low-cost monitoring of tropical forests. This new feature opens up many research perspectives, such as a better understanding of forest structure and dynamics, as well as their responses to environmental change.

Beyond the model’s promising performance, a more detailed analysis of its “errors” showed that the lower scores observed were more often linked to human errors in the training dataset (misidentifications or poorly framed images) than to limitations of the model itself.

This finding once again highlights the crucial importance of validated reference data, produced in sufficient quantity and quality by experts, for the development of reliable tools for automated biodiversity monitoring.

These results open up opportunities for further improvement, notably to strengthen the model’s ability to automatically exclude situations most prone to errors, while ultimately expanding both the number of identifiable species and the scale of analysis.
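One common way to let a classifier “exclude situations most prone to errors” is to abstain whenever its top prediction falls below a confidence threshold. The sketch below illustrates that idea only; the function name, the threshold value, and the example species probabilities are hypothetical and are not taken from the Pl@ntNet system.

```python
def identify_with_rejection(probs, labels, threshold=0.8):
    """Return the most likely species label, or None to abstain.

    probs: per-species probabilities for one crown image (sums to ~1).
    labels: species names aligned with probs.
    threshold: hypothetical confidence cutoff; below it the model
    abstains rather than risk a likely misidentification.
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None  # abstain: situation judged too error-prone
    return labels[best]

# Illustrative species and scores (invented for this example):
labels = ["Lophira alata", "Triplochiton scleroxylon", "Baillonella toxisperma"]
confident = identify_with_rejection([0.91, 0.06, 0.03], labels)  # accepted
ambiguous = identify_with_rejection([0.45, 0.40, 0.15], labels)  # rejected (None)
```

In practice the threshold would be tuned on validation data, trading coverage (how many crowns get an answer) against precision (how often that answer is right).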