The root of the problem
Imagine an elephant peeking through your kitchen wall and a polar bear scavenging through the trash. These are real issues people face in different parts of the world. When Alasdair from Arribada shows me these pictures and talks about their work in reducing animal-human conflicts, I am all in. The field tests will be adventurous.
Human-animal conflict is a big issue that people have been trying to crack for a long time. Prevention, not defence, is the key to avoiding harm. We want safety, but wildlife wants food! So how do we deal with this? Well, an early detection system that warns the rangers and makes some sound and light to scare the animals away could do the job. But here’s the catch: we can’t trigger it for everything that passes by. It should be activated only by the troublemakers, so the cameras need to be smart and recognise them.

With the advancements in technology, AI comes to the rescue and makes this possible on a low budget. In the world of conservation, most of the time ‘low budget’ isn’t just a suggestion, it’s the rule. The idea is simple: create a smart camera which runs an app with an AI model to recognise what is in front of it. Simple yet complicated! Even the most advanced recognition algorithms need a lot of data for training. Otherwise, it’s like your average teenager – all gear and no idea. Garbage in, garbage out.
Keep it simple, stupid – KISS
I get the full gist of Arribada’s work so far and I immediately see the main issue. Even with all available training sets you can never train a model for all possible angles and all possible environments. So my idea is different. Instead of trying to create this universal dream team model, create a device which can be trained after the installation. In other words, ship it, install it, and train it remotely.
The camera is the next big piece of the puzzle. There are two options. Both with advantages and disadvantages.
CCTV
- low-cost
- robust housing
- only 5-10m night range
- unusable in limited visibility – heavy snow, rain or fog.
Thermal
- accurate even at 30+ meters
- mechanical shutter which freezes at low temps.
- easier to train – the thermal signature is very distinct.
A new way to detect the flu
Based on Arribada’s extensive research, the FLIR Lepton 2.5 thermal camera is the choice for prototyping. Alasdair sends me all the hardware and a few days later there it is – my first-ever thermal image. The black nose must be an early sign of the flu!
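If you want to reproduce that first image, here is a minimal sketch of grabbing a frame from the Lepton on a Raspberry Pi. It assumes the community pylepton library and OpenCV are installed and that the camera sits on the default SPI bus – treat it as a starting point, not the exact code I ran.

```python
# Minimal Lepton capture sketch (assumes the pylepton library and a Lepton
# wired to the Raspberry Pi's default SPI bus, /dev/spidev0.0).
import numpy as np
import cv2
from pylepton import Lepton

with Lepton("/dev/spidev0.0") as lepton:
    frame, _ = lepton.capture()  # 80x60 frame of 14-bit raw values (uint16)

# Stretch the raw values to the full 16-bit range, then drop to 8-bit for viewing
cv2.normalize(frame, frame, 0, 65535, cv2.NORM_MINMAX)
np.right_shift(frame, 8, frame)
cv2.imwrite("thermal.png", np.uint8(frame))
```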
The art of budget prototyping – innovating from the balcony
So the first field test is right on my balcony. I quickly find out that the box leaks and I discover that cling film is not just for sandwiches: it is perfect for prototyping on a budget. What is next? Ooh yeah, overheating. Another upgrade, or more of a downgrade. I ditch the enclosure and replace it with a simple plastic food container from the nearby hardware store. It’s a two-for-one fix – cooling and accessibility. Not field-ready but good enough for tinkering.
Feeling happy with my progressive thinking, I’m about to hit the coding zone, but wait – another bump. The camera adapter is loose and the camera pops out randomly. The old cable-ties trick is a no-go and this is the end of it. All the tinkering damaged it somehow, so it’s time for plan B: I’ll use a CCTV camera instead – it is cheap and tough.
Code, collaborate and celebrate
The hardware is all set, so it’s time to dive into coding. We start with Golang but quickly realise that Python is the dominant language in this domain, so we switch. With a few more projects on my table, balancing becomes challenging, so I decide to bring in an outside collaborator – that is when Tudor jumps in to help. After a few brainstorming sessions, we settle on the flow from the schematic below. The RPi runs motion detection and uses Edge Impulse for training models and classifying the images. Every few hours the app exports the unknown results back to Edge Impulse for manual tagging and training of an improved model, and the new model is loaded back into the app.
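To make that flow concrete, here is a rough sketch of the on-device loop. It assumes a USB/CCTV camera readable by OpenCV and a classification model exported from Edge Impulse for their Linux Python SDK; the file names, threshold and frame-differencing parameters are placeholders rather than the exact values we used.

```python
import time
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # .eim model exported from Edge Impulse (placeholder name)
UNKNOWN_DIR = "unknown/"       # frames queued for manual tagging and retraining
CONFIDENCE = 0.6               # below this we treat the result as "unknown"

def detect_motion(prev_gray, gray, min_area=500):
    """Very simple frame-differencing motion check."""
    diff = cv2.absdiff(prev_gray, gray)
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) > min_area for c in contours)

cap = cv2.VideoCapture(0)  # the CCTV/USB camera
with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None and detect_motion(prev_gray, gray):
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            features, cropped = runner.get_features_from_image(rgb)
            result = runner.classify(features)
            # assumes a classification-style model; object detection models
            # return bounding boxes instead
            scores = result["result"]["classification"]
            label, score = max(scores.items(), key=lambda kv: kv[1])
            if score < CONFIDENCE:
                # queued here; a separate job uploads these to Edge Impulse every few hours
                cv2.imwrite(f"{UNKNOWN_DIR}{int(time.time())}.jpg", frame)
            else:
                print(f"detected {label} ({score:.2f})")
        prev_gray = gray
        time.sleep(0.2)
```

Keeping the “unknown” frames on disk and uploading them in batches is what makes the train-after-installation idea work: the model in the field only needs to be good enough to know what it doesn’t know.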
Soon enough we have a functional demo. And it’s working! We even do a short presentation for Mark from the Balena team. Mark shares that they are working on a similar idea, a project called “Bird Watcher”, so at some point we can join forces. The power of open source!
Ready to roll, nowhere to go
Alright, so we have this cool demo, but detecting cars is boring. What we’re really after is a wildlife challenge!
Back when I was in Kenya I had a chat with the camp owner about the smart camera project and initially, he seemed interested, but unfortunately, it led nowhere.
Then there’s Fabian from South Tyrol, Italy, who discovers our project and is keen on doing some trial runs. He’s working on his master’s in wildlife ecology and management, focusing on a project with a similar aim – “Promote the coexistence of humans and wildlife” – wolves and bears in the Alps in particular. Turns out that getting wolves on camera is not easy. Fabian thinks they’ve got a sixth sense for dodging cameras, maybe because of old anti-poaching instincts. So we focus on bears. They’re easier to capture and notorious for hitting beekeepers’ hives for a honey feast, so the beekeepers would be super happy if our project can help them out. It all sounds good until we reach the topic of the budget, or the lack of it! Fabian’s efforts to secure funding hit a dead end. In general, I’m totally okay with sponsoring the whole thing, but I always ask for a split in the budget. It’s my reality check – a way to make sure there is a real commitment and not just talk.
Early in our chats, Fabian mentions Conservation X Labs and their Sentinel project, a production-ready version of exactly what we are trying to build. I contact them immediately and Henrik from their team explains all about it:
- It uses OpenMV and Edge Impulse, which is very close to our idea.
- a dashboard for managing devices and uploading new models to cameras remotely
- the network connection is established using GSM or satellite
We kick around the idea of collaborating by using our work for an open-source community version of Sentinel, but this also doesn’t lead anywhere.
So here we are: pumped and ready and still looking for our first trial spot!
There is light at the end of the tunnel. First trials!
A few discouraging months later I decide to make a post in the Wildlabs forum and bingo! We are teaming up with Lars from the Zackenberg Research Station. First, we will deploy the prototype in a zoo in Denmark and later at their research centre in Greenland. All of a sudden the Discord channel gets very active. I am surprised and excited. What looked like a dead end is now in super active development. Who knew that one simple post could be the black swan event?
What is even more exciting is that Lars mentions more projects trying to solve the same issue: Kim with the StalkedByTheState project, and Tim from the provocatively named “Hack the Planet” initiative with their AI camera trap project. What does this mean? Competition or collaboration? The basic human nature dilemma. I am a big believer that the secret is finding the right incentives. There are plenty of use cases and each approach will have its pros and cons, so I am quite optimistic that this will be a great collab. The first few days already prove it!
Dispelling the Illusions – the false positives disease
System deployed, and the images are coming in. A very rocky start full of false positives, but in AI this is expected. More training data, some tuning and bug-fixing, and things are looking great.
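One of the cheapest tuning tricks against one-off false positives is to require several consecutive confident hits before raising an alert. Here is a small, purely illustrative sketch of that idea – the thresholds are made-up knobs, not the values we deployed:

```python
from collections import deque

class DetectionFilter:
    """Suppress one-off false positives by requiring several consecutive
    confident hits of the same label before raising an alert."""

    def __init__(self, min_score=0.8, min_hits=3):
        self.min_score = min_score
        self.hits = deque(maxlen=min_hits)

    def update(self, label, score):
        # Record the label only if it cleared the confidence bar
        self.hits.append(label if score >= self.min_score else None)
        # Trigger only when the window is full of identical, confident hits
        if len(self.hits) == self.hits.maxlen and len(set(self.hits)) == 1 and self.hits[0]:
            return self.hits[0]  # stable detection -> trigger the deterrent
        return None
```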
A pause (for now). The false positives won, but my vision remains
Moving the camera to a new location brought an avalanche of new false positives, so the model needs much more tweaking. With a deep breath and a few silent tears, it’s time to call it! My vision remains though: I still believe that such a community device has great potential. I feel that this story isn’t over yet, but for now I turn my focus to other stories waiting to unfold.
The nerd section
- Github repo
- Research links
- Wildlife Insights – an online service for sharing and tagging wildlife images
- a very good article about training a model using existing data sets
- LILA – image data sets
- MMDetection – an object detection toolbox based on PyTorch
- OpenMV – an embedded solution for the Lepton
- Balena – Bird Watcher – similar prototype
- Conservation X Labs – Sentinel, a commercial solution
- MTurk – a turnkey data labelling service from Amazon
- Polar Bears International – radar motion detection article, video1, video2
- HackThePlanet – a project to modify trap cameras with AI
- StalkedByTheState – a project that uses a pretrained YOLOv7 model (recently changed to open source)
- Science For Conservation – a mesh of cameras ($1k each) to increase the range and send the images to a central location. Intro video, GitHub, an article on the setup
- Other similar open-source projects