Predicting fruit harvest with drones and artificial intelligence
- by 7wData
Outfield Technologies is a Cambridge-based agri-tech start-up that uses drones and artificial intelligence to help fruit growers maximise their harvest from orchard crops.
Outfield Technologies' founders Jim McDougall and Oli Hilbourne have been working with Ph.D. student Tom Roddick from the Department's Machine Intelligence Laboratory to develop the technology to count the blossoms and apples on individual trees, using drones to survey enormous apple orchards.
"An accurate assessment of the blossom or estimation of the harvest allows growers to be more productive, sustainable and environmentally friendly", explains Outfield's commercial director Jim McDougall.
"Our aerial imagery analysis focuses on yield estimation and is really sought after internationally. One of the biggest problems we're facing in the fruit sector is accurate yield forecasting. This system has been developed with growers to plan labour, logistics and storage. It's needed throughout the industry, to plan marketing and distribution, and to ensure that there are always apples on the shelves. Estimates are currently made by growers, and they do an amazing job, but orchards are incredibly variable and estimates are often wrong by up to 20%. This results in lost income, inefficient operations, and can lead to a substantial amount of wastage in unsold crop."
Outfield's identification methods are an excellent application of the research that Ph.D. student Tom Roddick, supervised by Professor Roberto Cipolla, is working on. Tom is part of the Computer Vision and Robotics Group, which concentrates on artificial intelligence and machine learning, using deep learning methods built on artificial neural networks (ANNs).
ANNs are computing systems loosely modelled on the human brain and designed to recognise patterns. They interpret sensory data by labelling or clustering raw input. The patterns they recognise are numerical: all real-world data, whether images, sound, text or time series, must first be translated into numbers.
Such systems "learn" to perform tasks by analysing examples, generally without being programmed with task-specific rules. In image recognition, for example, an ANN might learn to identify images that contain apples by analysing example images that have been manually labelled as "apple" or "no apple", then using what it has learned to identify apples in new images. It does this without any prior knowledge of apples, such as their typical colours or shapes. Instead, it automatically derives identifying characteristics from the examples it processes.
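The idea of learning a rule purely from labelled examples can be sketched with a tiny classic learner, the perceptron. This is an illustrative toy, not Outfield's actual system: the "images" here are just hypothetical two-number feature vectors (say, redness and roundness), and no apple-specific rule is ever written by hand.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from labelled examples: (features, label),
    where label 1 means "apple" and 0 means "no apple"."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # update weights only when the prediction is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Classify a new feature vector with the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy labelled data: (redness, roundness) -> apple?
labelled = [
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.9], 1),
    ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.3, 0.2], 0),
]
w, b = train_perceptron(labelled)
print(predict(w, b, [0.85, 0.85]))  # 1: apple-like features
print(predict(w, b, [0.15, 0.15]))  # 0: non-apple features
```

A real deep network replaces the hand-picked feature vectors with raw pixels and learns millions of weights, but the principle is the same: the identifying characteristics come from the labelled examples, not from programmed rules.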
ANNs cluster and classify by first detecting simple patterns in the data, such as edges in images or individual sounds in speech, and then gradually building a hierarchy of concepts until complex features, such as faces or sentences, emerge.
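This layering of simple patterns into richer concepts can be illustrated with a deliberately simplified two-layer sketch (far simpler than a real deep network, and with hand-written rather than learned filters): layer 1 detects edges, i.e. brightness changes, in a one-dimensional row of pixels; layer 2 combines those edge responses into a higher-level concept, a "bright blob" (a strong rise followed by a strong fall).

```python
def edge_layer(pixels):
    """Layer 1: respond to local brightness changes (a simple pattern)."""
    return [pixels[i + 1] - pixels[i] for i in range(len(pixels) - 1)]

def blob_layer(edges, threshold=0.5):
    """Layer 2: a 'blob' is a strong rising edge followed by a strong
    falling edge, built from layer 1's outputs."""
    rises = [i for i, e in enumerate(edges) if e > threshold]
    falls = [i for i, e in enumerate(edges) if e < -threshold]
    return any(r < f for r in rises for f in falls)

row = [0.0, 0.1, 0.9, 1.0, 0.9, 0.1, 0.0]  # dark - bright - dark
flat = [0.2] * 7                           # uniform brightness, no blob
print(blob_layer(edge_layer(row)))   # True: edges combine into a "blob"
print(blob_layer(edge_layer(flat)))  # False: no edges, so no blob
```

In a trained network these filters are not written by hand: each layer's pattern detectors are learned from the data, and stacking many such layers is what lets complex features like blossoms, apples or faces emerge from raw pixels.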