Blog
Thoughts on ML, computer vision, and AI research
Circuit Extraction: Interpreting Object Detectors
Using activation patching and co-activation analysis to extract the minimal computational circuit for pot detection in Faster R-CNN.
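For a flavor of the approach, here is a minimal activation-patching sketch against a stock torchvision Faster R-CNN. The patch site (`backbone.body.layer3`), the random stand-in images, and reading off the top detection scores are illustrative assumptions, not the exact setup from the post.

```python
# Minimal activation-patching sketch (illustrative; layer choice and inputs are assumptions).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
layer = model.backbone.body.layer3  # hypothetical patch site

cache = {}

def save_hook(module, inp, out):
    cache["clean"] = out.detach()       # stash the clean run's activation

def patch_hook(module, inp, out):
    return cache["clean"]               # overwrite the corrupted run's activation

clean_img = [torch.rand(3, 512, 512)]    # stand-in for an image containing a pot
corrupt_img = [torch.rand(3, 512, 512)]  # stand-in for an image without a pot

with torch.no_grad():
    h = layer.register_forward_hook(save_hook)
    model(clean_img)                     # cache the clean activation
    h.remove()

    h = layer.register_forward_hook(patch_hook)
    patched_out = model(corrupt_img)     # corrupted input, clean activation patched in
    h.remove()

# If patching this layer restores pot detections, it is evidence the layer is part of the circuit.
print(patched_out[0]["scores"][:5])
```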
Object Detection on Drone Orthomosaics with SAM
An overview of using Meta's Segment Anything Model for automated object detection in high-resolution aerial imagery, with applications in precision agriculture.
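As a taste of that pipeline, the sketch below reads one tile of a GeoTIFF with rasterio and runs SAM's automatic mask generator on it. The file names, checkpoint path, and tile size are placeholders.

```python
# Hedged sketch: SAM automatic mask generation on one orthomosaic tile.
import numpy as np
import rasterio
from rasterio.windows import Window
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder checkpoint path
mask_generator = SamAutomaticMaskGenerator(sam)

with rasterio.open("orthomosaic.tif") as src:  # placeholder file
    # Read a 1024x1024 tile (bands 1-3 as RGB): (3, H, W) -> (H, W, 3) uint8
    tile = src.read([1, 2, 3], window=Window(0, 0, 1024, 1024))
    tile = np.transpose(tile, (1, 2, 0)).astype(np.uint8)

masks = mask_generator.generate(tile)  # list of dicts with "segmentation", "bbox", "area", ...
print(f"{len(masks)} candidate masks on this tile")
```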
Sparse Linear Probing for Efficient Detection
Using L1-regularized linear probes to identify minimal feature subsets from SAM and Faster R-CNN that are sufficient for pot detection.
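The core of that post can be sketched in a few lines with scikit-learn: fit an L1-penalized logistic regression on pre-extracted backbone features and count the surviving coefficients. The feature files, regularization strength `C`, and train/test split below are illustrative assumptions.

```python
# Hedged sketch of a sparse (L1) linear probe on pre-extracted backbone features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.load("backbone_features.npy")   # placeholder: (n_patches, n_channels) pooled features
y = np.load("pot_labels.npy")          # placeholder: (n_patches,), 1 = pot, 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.05, max_iter=1000)
probe.fit(X_tr, y_tr)

kept = np.flatnonzero(probe.coef_[0])  # features with nonzero weight survive the L1 penalty
print(f"accuracy: {probe.score(X_te, y_te):.3f}, features kept: {len(kept)} / {X.shape[1]}")
```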
Extracting Features from Vision Model Backbones
A technical guide to extracting and visualizing internal representations from SAM and Faster R-CNN for interpretability research.
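A minimal version of the extraction step, assuming the public torchvision and segment_anything implementations: the Faster R-CNN FPN backbone returns a dict of multi-scale maps, while SAM's ViT image encoder returns a single 256-channel embedding grid. Checkpoint paths and input sizes are placeholders.

```python
# Hedged sketch: pull intermediate feature maps out of both backbones.
import torch
import torchvision
from segment_anything import sam_model_registry

# Faster R-CNN: the FPN backbone returns an ordered dict of multi-scale feature maps.
frcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    fpn_feats = frcnn.backbone(torch.rand(1, 3, 800, 800))
for name, fmap in fpn_feats.items():
    print(f"FPN level {name}: {tuple(fmap.shape)}")

# SAM: the ViT image encoder produces one dense embedding grid per image.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").eval()  # placeholder path
with torch.no_grad():
    sam_feats = sam.image_encoder(torch.rand(1, 3, 1024, 1024))
print(f"SAM embedding: {tuple(sam_feats.shape)}")  # expected (1, 256, 64, 64)
```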
Mechanistic Interpretability for Agricultural AI
Exploring how mechanistic interpretability techniques can help us understand what vision models learn about agricultural environments and build more trustworthy AI systems.
SAM vs Faster R-CNN: A Practical Comparison
Comparing Segment Anything Model and Faster R-CNN for aerial object detection—architecture, fine-tuning approaches, and when to use each.
Fine-Tuning Vision Foundation Models
A practical guide to fine-tuning strategies for vision models like SAM and Faster R-CNN, with insights on data efficiency and domain adaptation.
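The standard torchvision recipe for the Faster R-CNN side looks roughly like this; the two-class label set (background + pot) and the frozen-backbone first stage are assumptions for illustration.

```python
# Hedged sketch: adapt a pretrained Faster R-CNN to a new label set.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # assumption: background + pot
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box-predictor head so it outputs the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Optionally freeze the backbone for a data-efficient first training stage.
for p in model.backbone.parameters():
    p.requires_grad = False

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=5e-4)
```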
Building a GeoTIFF Object Detection Web App
A walkthrough of building a web application for running Faster R-CNN inference on geospatial imagery with FastAPI, WebSockets, and Leaflet.
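In outline, the backend reduces to a FastAPI upload endpoint plus a WebSocket for progress updates; the route names and the commented-out `run_detection()` helper below are hypothetical stand-ins for the real tiling and inference code.

```python
# Hedged sketch of the backend shape: upload endpoint + progress WebSocket.
from fastapi import FastAPI, File, UploadFile, WebSocket

app = FastAPI()

@app.post("/detect")
async def detect(file: UploadFile = File(...)):
    data = await file.read()  # raw GeoTIFF bytes
    # run_detection() is a hypothetical placeholder for tiling + Faster R-CNN inference.
    # detections = run_detection(data)
    return {"filename": file.filename, "size_bytes": len(data)}

@app.websocket("/ws/progress")
async def progress(ws: WebSocket):
    await ws.accept()
    # In the real app, per-tile progress would be pushed here during inference.
    await ws.send_json({"status": "idle"})
    await ws.close()
```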
Training Faster R-CNN for Geospatial Object Detection
A deep dive into training object detection models on aerial imagery, from SAM masks to production-ready Faster R-CNN with hard negative mining.
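One concrete step from that pipeline, sketched under assumptions: converting SAM's mask dictionaries (XYWH `bbox` fields) into the XYXY box and label targets that torchvision's Faster R-CNN expects during training. The area-filter threshold and single "pot" class are illustrative.

```python
# Hedged sketch: turn SAM mask dicts into Faster R-CNN training targets.
import torch

def sam_masks_to_targets(masks, min_area=200):
    """Convert SAM mask dicts (XYWH 'bbox') into XYXY boxes plus labels."""
    boxes = []
    for m in masks:
        if m["area"] < min_area:
            continue  # drop tiny spurious masks (threshold is an assumption)
        x, y, w, h = m["bbox"]
        boxes.append([x, y, x + w, y + h])
    boxes = torch.as_tensor(boxes, dtype=torch.float32).reshape(-1, 4)
    labels = torch.ones((boxes.shape[0],), dtype=torch.int64)  # single "pot" class
    return {"boxes": boxes, "labels": labels}

# Usage with the output of SamAutomaticMaskGenerator.generate(tile):
# target = sam_masks_to_targets(masks)
# loss_dict = model([image_tensor], [target])  # Faster R-CNN in train mode
```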