There's a very specific kind of chaos that comes with being a PhD student browsing research literature. Tabs multiply. You save a paper somewhere and can't find it again. A conference deadline you were tracking quietly passes. You spend hours searching for journals that match your niche, only to end up with a general-purpose database and ten irrelevant results.
I built Research Tracker because I was living that chaos myself. This post is about why I built it, how I built it, and a few things I learned along the way about browser extensions, privacy, and shipping something real.
The Problem It Solves
Academic reading has a workflow problem. Publishers are fragmented: your papers live on arXiv, PubMed, IEEE, Nature, Springer, and a dozen other platforms with no shared interface. Your notes live in a different app. Your deadlines live in your calendar or, let's be honest, in your memory. And when it comes to finding the right venue for your own work, most researchers use a combination of Google Scholar, advice from supervisors, and guesswork.
Research Tracker tries to solve all three in one place, without requiring you to create an account, share any data, or trust a third-party server with your research history.
The Four Core Features
📚 Paper Library
Auto-detects papers on 10+ publisher sites and shows a save notification. Full-text search, custom notes, and one-click access to the original URL, all in one place.
📡 Discover
AI-powered journal and conference recommendations based on your research interests, powered by the OpenAlex API with real impact metrics.
📅 Deadlines
Color-coded deadline tracker. Upcoming, overdue, and due-today are all visually distinct at a glance.
👤 Profile Import
One-click import from Google Scholar, LinkedIn, or ResearchGate. Feeds directly into the recommendation engine.
The Recommendation Engine
This is the part I'm most proud of and that took the most iteration. The goal was simple to state but tricky to implement: given a researcher's interests, surface the journals and conferences where their work would actually fit, and not just any venue that mentions the same keywords.
The engine works in three stages. First, it collects your signal: keywords from your saved paper titles, manually entered interests, and anything extracted from your academic profiles. Second, it queries the OpenAlex API (a free, open academic database) with those terms. Third, it runs every result through a scoring function that weights topical relevance far above prestige metrics.
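To make the shape of that scoring concrete, here is an illustrative sketch in plain JavaScript. The weights, field names, and matching logic are my hypothetical stand-ins, not the extension's shipped formula; the only thing taken from the post is the design intent that relevance dominates and prestige is a small multiplier:

```javascript
// Illustrative sketch of the venue-scoring idea; weights are hypothetical.
function scoreVenue(venue, interestKeywords) {
  // Relevance: fraction of the researcher's keywords that match the
  // venue's topics. This term is meant to dominate the final score.
  const topics = venue.topics.map((t) => t.toLowerCase());
  const matched = interestKeywords.filter((kw) =>
    topics.some((t) => t.includes(kw.toLowerCase()))
  ).length;
  const relevance = matched / Math.max(interestKeywords.length, 1);

  // Prestige terms act only as tiebreakers: small log-scaled multipliers
  // so a high h-index can't outweigh a poor topical match.
  const hIndexBoost = 1 + 0.05 * Math.log1p(venue.hIndex || 0);
  const citationBoost = 1 + 0.02 * Math.log1p(venue.citedByCount || 0);

  return relevance * hIndexBoost * citationBoost;
}
```

With this structure, a perfectly relevant venue with a modest h-index always outranks an irrelevant venue with an enormous one, because a zero-relevance score zeroes out the whole product.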
The multipliers for h-index and citation volume are small on purpose. I didn't want the engine to just recommend the highest-impact journals regardless of relevance; that's a different problem, and not a useful one for most researchers. The relevance signal has to dominate; prestige is a tiebreaker.
Results are cached in memory for 24 hours so the extension doesn't hammer the OpenAlex API on every click. The cache is keyed by venue ID and cleared on browser restart.
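The TTL and venue-ID keying are as described above; the implementation below is my minimal sketch of that pattern, not the extension's actual code. Because the cache is a plain in-memory `Map` inside the service worker, "cleared on browser restart" comes for free:

```javascript
// Sketch of a 24-hour in-memory cache keyed by venue ID.
const TTL_MS = 24 * 60 * 60 * 1000;
const cache = new Map(); // venueId -> { value, storedAt }

function cacheGet(venueId, now = Date.now()) {
  const entry = cache.get(venueId);
  if (!entry) return undefined;
  if (now - entry.storedAt > TTL_MS) {
    cache.delete(venueId); // expired: evict and report a miss
    return undefined;
  }
  return entry.value;
}

function cachePut(venueId, value, now = Date.now()) {
  cache.set(venueId, { value, storedAt: now });
}
```

The `now` parameter exists only to make expiry testable; callers in the extension would use the default.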
Paper Detection: The Surprisingly Tricky Part
Getting the extension to reliably detect a paper page sounds straightforward. Check the URL, check some selectors, extract the title and DOI. In practice it was more complicated than I expected.
Different publishers structure their pages very differently. arXiv is clean and consistent. IEEE Xplore loads content dynamically and can take several seconds to fully render. Wiley and Springer have changed their DOM structures over time. The solution was a multi-attempt detection strategy: the content script tries to detect paper metadata at 500ms, 1500ms, and 3000ms after page load. If any attempt succeeds, the save notification appears. If the page still hasn't loaded enough structure by 3 seconds, the extension silently fails rather than showing a broken notification.
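The staged-retry idea can be sketched like this. The 500/1500/3000 ms schedule comes from the post; `extractMetadata` is a hypothetical stand-in for the per-publisher selector logic, and resolving to `null` models the silent-failure behavior:

```javascript
// Sketch of staged retry detection: attempt extraction at fixed offsets
// after page load, succeed on the first hit, give up silently otherwise.
function detectWithRetries(extractMetadata, schedule = [500, 1500, 3000]) {
  return new Promise((resolve) => {
    let settled = false;
    const tryOnce = (isLast) => {
      if (settled) return;
      const meta = extractMetadata(); // e.g. { title, doi } or null
      if (meta) {
        settled = true;
        resolve(meta); // first successful attempt wins
      } else if (isLast) {
        settled = true;
        resolve(null); // silent failure: no broken notification
      }
    };
    schedule.forEach((ms, i) =>
      setTimeout(() => tryOnce(i === schedule.length - 1), ms)
    );
  });
}
```

A caller would then show the save notification only when the promise resolves with metadata, which is what keeps slow-rendering pages like IEEE Xplore from producing a half-empty prompt.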
This covers the vast majority of what a researcher in engineering, biology, or medicine will encounter. Adding new publishers is a matter of writing a new selector configuration in the content script.
Privacy as Architecture, Not Policy
I made a deliberate decision early on: no servers, no telemetry, no accounts. Not as a selling point, but as a constraint that shapes everything else. If all data lives in the browser, I can't accidentally leak it. If there are no servers, there's nothing to breach. If there's no account system, there's no sign-up friction that discourages researchers from even trying the tool.
Here's exactly what gets stored and where:
| Data | Storage Location | Who Can See It |
|---|---|---|
| Saved papers | chrome.storage.sync (your device) | Only you |
| Deadlines | chrome.storage.sync (your device) | Only you |
| Profile keywords | chrome.storage.sync (your device) | Only you |
| OpenAlex queries | Anonymous API request | OpenAlex (search terms only, no personal info) |
The only outbound network request the extension makes is an anonymous keyword query to the OpenAlex API. No cookies, no user identifiers, no analytics. The extension has never seen a single user's email address and never will.
Technical Lessons from Building in Manifest V3
Chrome's Manifest V3 extension standard is stricter than the previous V2. Background scripts are now service workers, which means they can be terminated by the browser at any time. Persistent background pages no longer exist. This changes how you manage state: anything that needs to survive between user interactions has to live in storage, not in memory.
For Research Tracker this mostly meant being careful about the cache. The in-memory OpenAlex cache gets rebuilt on the next recommendation request after a service worker restart. Users never notice because the cache rebuild is fast, but it was a real architectural consideration.
The other lesson: escaping user input before DOM insertion is non-negotiable. Every piece of data that comes from the user or from an API response goes through an escapeHtml() function before touching the DOM. It's a small habit that eliminates an entire class of XSS vulnerabilities.
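The function itself is tiny. Here is a dependency-free version of the idea (the extension's actual implementation may differ, for instance by using the DOM's `textContent` trick instead):

```javascript
// Minimal HTML-escaping helper: replaces the five characters that can
// change parsing context when a string is interpolated into markup.
function escapeHtml(value) {
  return String(value).replace(/[&<>"']/g, (ch) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[ch]));
}
```

The habit that matters is less the function than the rule: every string from a user or an API response passes through it before any `innerHTML`-style insertion, with no exceptions for "trusted" fields.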
What I'd Build Next
The most requested feature from early users is BibTeX export. That's next. After that: browser notifications for approaching deadlines (so you don't have to open the popup to check), bulk import from existing .bib files for researchers who already have a library, and better filtering in the Discover tab by discipline and open access status.
The feature I personally want most is a reading list sharing mechanism: something where you could generate a URL that lets a collaborator view (but not edit) your saved papers. That one has more design complexity, and I want to get it right rather than rush it.
Go Try It
Research Tracker is free and available on the Chrome Web Store. If you're a researcher who's ever lost a paper they meant to save, or missed a submission deadline, or spent an afternoon trying to figure out where to submit your work: it was built for you.
→ Install on Chrome Web Store

If you use it and have feedback (good or bad), reach out on LinkedIn. I read everything.