Akshat Mittal


Co-Founder Read Papers Fast
Joined June 2025

Weekly Update: September 1, 2025

✅ What did I get done?

  • Finished a rough version of the prototype with Gemini integration. The system can now control map layers, schedule analysis tasks, and display results using natural language.
  • Had two conversations with industry professionals:
    • First: Understanding overall structure and current workflows in the renewables space.
    • Second: Deep dive into availability and use of weather data for modeling energy potential.
  • Key takeaway: While OSM data is useful for certain cases, there are richer datasets available via government sources. Satellite-based weather data can be effectively mapped to solar and wind potential.
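
For a flavor of how the natural-language control works: a model with function calling (e.g. Gemini) turns a user request into a structured tool call, which a thin dispatch layer routes to map actions. The sketch below covers only that dispatch layer; the tool names, call format, and handlers are illustrative, and the actual Gemini plumbing is omitted.

```python
# Minimal sketch of the dispatch layer behind a natural-language map
# interface. The LLM is assumed to return a structured tool call like
# {"name": ..., "args": {...}}; we only route it to map actions here.
# Tool names and the call format are illustrative, not a real API.

def toggle_layer(state: dict, layer: str, visible: bool) -> str:
    # Show or hide a named map layer in the shared UI state.
    state.setdefault("layers", {})[layer] = visible
    return f"layer '{layer}' {'shown' if visible else 'hidden'}"

def schedule_analysis(state: dict, task: str) -> str:
    # Queue an analysis task (e.g. solar-potential estimation) for later.
    state.setdefault("queue", []).append(task)
    return f"scheduled '{task}'"

TOOLS = {
    "toggle_layer": toggle_layer,
    "schedule_analysis": schedule_analysis,
}

def dispatch(state: dict, call: dict) -> str:
    """Route a parsed tool call {'name': ..., 'args': {...}} to its handler."""
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(state, **call["args"])
```

The same pattern extends to displaying results: each new capability is just another entry in the tool table, which keeps the LLM-facing surface small and auditable.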

🧠 Learnings

  • Natural language interfaces powered by AI are fundamentally changing how users interact with data tools. Learning how to build these systems positions us to lead in designing intuitive, accessible solutions.
  • We're actively navigating the trade-off between building a feature-rich product and launching early enough to validate. Finding the right MVP scope is a strategic challenge.

🧱 Limits

  • No direct access to decision-makers yet, which is slowing validation.
  • Rewriting core parts of the codebase. A clear one-month roadmap would help structure development alongside ongoing marketing efforts, enabling us to incrementally improve the product during outreach.

🎯 Goals for this week

  • Define a clear one-month product and marketing roadmap.
  • Break it down into biweekly deliverables.
  • Begin executing the first set of goals this week.

Weekly Update: August 25, 2025


🎯 North-star Metric

  • Number of paying customers: 2
  • Monthly Recurring Revenue: €36

✅ What did I get done?

  • Completed ~80% of the prototype: all previous goals met except deployment. We can demo this on Tuesday along with AI report generation.
  • Spent ~10% of my time on business analysis—framing the size and urgency of the problem. I drafted a working hypothesis here.
    • Key insight: 80–90% of real estate and plot consulting companies have fewer than 20 employees, which may impact how we price, position, and sell the product.
  • Dedicated ~10% to conversations with peers who are tackling adjacent problems (not direct customers yet):
    • One conversation emphasized the need to break the product into clear, progressive steps.
    • Got a valuable suggestion to incorporate weather data APIs for solar and wind modeling.
    • Learned about the limitations of infra simulations—what’s feasible at various scales and what remains challenging.
    • Two more conversations are scheduled for Monday.

🧠 Learnings

  • The way we build infrastructure and use energy is constantly evolving. Understanding that complexity—and how decision-making varies by org size and type—will be crucial to finding product-market fit.

🧱 Limits

  • No direct access to decision-makers yet, which is slowing validation.

🎯 Goals for this week

  • Find at least 10 people who match our ideal customer profile and validate the core assumptions behind the product.
  • Prioritize outreach and user conversations—coding will take a back seat this week to focus on validation.

Weekly Update: August 18, 2025

🎯 North-star Metric

  • Number of paying customers: 2
  • Monthly Recurring Revenue: €36

✅ What did I get done?

  • Started building a plot analysis tool for datacenter and energy projects using geo-data.
  • Added power and water infrastructure layers to the prototype.
  • Had two conversations about the feasibility and potential bottlenecks of our approach:
    • Both showed strong support and interest, with possible follow-ups scheduled for this week.
  • Created a clear product vision:
    • Defined core features in a Product Dev Sheet.
    • Built a Figma prototype with UI flow.
    • Aligned on a timeline to complete the working prototype by Tuesday.

🧠 Learnings

  • Accurate distance analysis relies heavily on the resolution and alignment of geo-data layers.
  • There is substantial room for innovation in this space.
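
The distance analysis behind the plot tool can be illustrated with a plain haversine computation over WGS84 coordinates. This is a minimal sketch: the prototype works on GIS layers rather than raw point lists, and the function names here are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_infrastructure(plot, points):
    """Return (point, distance_km) of the infrastructure point closest
    to `plot`. Both are (lat, lon) tuples."""
    return min(
        ((p, haversine_km(plot[0], plot[1], p[0], p[1])) for p in points),
        key=lambda pair: pair[1],
    )
```

This also makes the learning above concrete: if two layers are misaligned by even a few hundred metres, every distance-to-infrastructure number inherits that error.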

⛔ Limits

  • No major blockers, but:
    • Minor delays due to figuring out how to load OSM data and set up a server to serve layers.
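
One small piece of the layer-serving problem is converting raw OSM elements (e.g. from the Overpass API) into GeoJSON that a map client can render directly. A minimal sketch handling nodes only; ways and relations need geometry assembly and are omitted, and the function name is illustrative.

```python
def osm_nodes_to_geojson(elements):
    """Convert Overpass-style OSM node dicts into a GeoJSON
    FeatureCollection. GeoJSON uses [lon, lat] coordinate order,
    which is an easy thing to get backwards."""
    features = []
    for el in elements:
        if el.get("type") != "node":
            continue  # skip ways/relations; they need geometry assembly
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [el["lon"], el["lat"]]},
            "properties": el.get("tags", {}),
        })
    return {"type": "FeatureCollection", "features": features}
```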

🎯 Goals for this week

  • ✅ Integrate and render at least two custom GIS layers accurately on the map.
  • ✅ Implement distance-to-infrastructure and land-type classification logic.
  • ✅ Deploy working prototype

Weekly Update: August 12, 2025

🌟 North-Star Metric
Paying Customers: 0

✅ What I Got Done

  • Brainstormed with Elias and Louis on numerous product ideas, narrowing down to three strong contenders and selecting the one that resonated most with Elias and me.

🧠 Learnings / New Thoughts

  • Realized the importance of deeply understanding teammates’ priorities and perspectives — this helped us align on a product concept that excites all stakeholders.

⛔ What's Limiting Right Now

  • Current laptop is too slow, which is limiting progress and causing some frustration. Working to find a replacement this week.

🎯 Goals for This Week

  1. Draft and share a one-page product vision doc with the team.
  2. Have a demo with at least 5 interactive layers ready for internal review.
  3. Get internal feedback.

Weekly Update: August 5, 2025

🌟 North-Star Metric

  • Paying Customers: 1 (Elias)

✅ What I Got Done

  • Integrated Stripe payments and confirmed successful checkout flow
  • Fixed major speed bottleneck, reducing page load time from ~4s to < 1s

🧠 Learnings / New Thoughts

  • Chose to maintain pricing discipline by rejecting early low-paying users

⛔ What's Limiting Right Now

  • Lack of a clear long-term vision for the product

🎯 Goals for This Week

  • Trigger paywall appearance 100+ times via organic or targeted traffic
  • Post in 50 relevant Reddit/Discord communities with tailored outreach
  • Fact-check 2,000 data points using semi-automated workflow
  • Hit 10,000 total page views

Weekly Update: July 28, 2025

🌟 North Star Metric

  • Number of paying customers: 0

✅ What I Got Done

  • Building check functionality
  • Hard sign-up screen implemented on 3rd attempt
  • Better code structure with DB and sign-up in mind
  • Error-free share functionality
  • Chat functionality updates
  • Survey DB and PostHog integration

🎯 Goals for This Week

  • Stripe payment integration
  • Continue code fixes and feature implementation

Weekly Update: July 15, 2025

🧭 North-star Metric

  • Number of PDFs successfully parsed and readable through our interface
  • Average session duration per user when engaging with documents

What I Got Done

  • Implemented functionality to extract authors from search input and use AI to match with the most relevant author profile.
  • Built and deployed new query suggestion feature based on users’ previous queries.
  • Researched OpenAlex’s approach to identifying related papers to explore how we can integrate or replicate their method for improving paper discovery.
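
The query-suggestion feature above can be sketched as ranking a user's past queries by frequency against the typed prefix. This is a toy version under assumed behavior; the deployed ranking (which could also weight recency or fuzzy matches) isn't shown, and the function name is illustrative.

```python
from collections import Counter

def suggest(prefix: str, history: list, k: int = 3) -> list:
    """Suggest past queries that start with the typed prefix,
    most frequently used first, ties broken alphabetically."""
    prefix = prefix.lower()
    counts = Counter(q.strip().lower() for q in history)
    matches = [q for q in counts if q.startswith(prefix)]
    matches.sort(key=lambda q: (-counts[q], q))
    return matches[:k]
```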

💡 Learnings / New Thoughts

Realized I may be too attached to our current MVP. YC advice reminds me to stay detached until we’re building something that clearly resonates. The product works, but it’s not yet novel or differentiated — and that’s okay at this stage.


🚧 What’s Limiting Right Now

The current solution feels too close to traditional academic search engines — it’s functional but lacks a distinct perspective or edge. I’m struggling to see what will make this product stand out or feel indispensable to users. This lack of uniqueness is making it hard to stay inspired and push bold ideas forward.


🎯 Goals for This Week

  • Conduct 2 user interviews with academic researchers to understand how they currently explore related papers and where they get stuck.
  • Co-ideate with Eli and produce 3 UI concepts that help users narrow down paper selection and run focused queries.
  • Prototype and test a workflow where users can select a subset of papers and run batch LLM queries; evaluate usability with at least 1 user.

Weekly Update: July 7, 2025

🧭 North-star Metric

  • Number of PDFs successfully parsed and readable through our interface
  • Average session duration per user when engaging with documents

What I Got Done

  • Built a PDF upload and search demo using a RAG (Retrieval-Augmented Generation) pipeline for semantic search
  • Implemented user authentication (Sign Up and Login) to track returning users
  • Added a post-paper survey prompt to collect structured user feedback
  • Integrated OpenAlex and Semantic Scholar APIs to improve academic paper discovery
  • Continued exploring connected papers and citation graphs to enhance filtering by relevance
  • Structured the codebase to support parallel development of both "read" and "find" components
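
The retrieval step of the RAG pipeline boils down to ranking document chunks by vector similarity to the query embedding. A toy sketch with the embedding model stubbed out (real vectors would come from an embedding API; names here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    """Return the top-k chunk texts ranked by cosine similarity.
    `chunks` is a list of (text, vector) pairs; in a real pipeline
    both query and chunk vectors come from the same embedding model,
    and the retrieved chunks are fed to the LLM as context."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```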

💡 Learnings / New Thoughts

  • Gained a deeper appreciation of how LLMs act as a new kind of programmable interface—allowing functionality to be expressed and executed through natural language up to a meaningful extent

🚧 What’s Limiting Right Now

  • Lack of a deeper, grounded understanding of the core user problem in academic research—more discovery and validation is needed

🎯 Goals for This Week

  • Reactivate and debug the "read" pipeline
  • Implement text and element tree-type fetching for connected papers
  • Conduct two user interviews
  • Await visa feedback

Weekly Update: June 30, 2025

🧭 North Star Metric

Paper Searches through Find Papers Fast: 25
Target for this week: 100 searches


✅ What did I get done?

  • Integrated PostHog and a search API to enable analytics and paper search functionality in Find Papers Fast.
  • Collected initial user feedback on search results.
  • Ran a form to identify users’ most painful problems.
  • Scheduled visa appointment for Germany (personal milestone).

⛔ What is limiting right now?

No major blockers.


🎯 Goals for this week

  • Reach 100 completed paper searches via Find Papers Fast.
  • Collect 50+ user emails and pain point submissions via the form.
  • Integrate OpenAlex API and validate improvement in result breadth.
  • Complete and submit German visa application.

Weekly Update: June 23, 2025

🧭 North-star Metric

Primary metrics:

  • Number of PDFs successfully parsed and readable through our interface
  • Average session duration per user when engaging with documents

✅ What did I get done?

  • Added extracted reference links and notes to the frontend
  • Refactored code into modular components for PaperC
  • Generated ordered topics for PDF documents
  • Enabled uploading and linking of PDFs to relevant summaries
  • Implemented paper fetching based on generated keywords

💡 Learnings / New Thoughts

Using LLMs has significantly increased my coding throughput—about 3x more code in the same amount of time—but this can also lead to messier structure and less deliberate design. I’m adjusting by building in more upfront planning and modular architecture.

I'm also realizing how LLMs can act as makeshift backends through prompting, enabling surprisingly structured outputs without traditional API logic.


🚧 What is limiting right now?

I'm currently struggling with a lack of inspiration and direction when it comes to deepening the paper search experience. This seems rooted in a deeper uncertainty.


🎯 Goals for this week

  • Restructure document-fetching logic into clean, testable service modules
  • Add search options that allow users to explore a broader set of research papers
  • Conduct 2 user interviews focusing on how users currently search for relevant academic papers

Weekly Update: June 16, 2025

🧭 North-star metric

Primary metrics:

  • Number of PDFs successfully parsed and readable through our interface

  • Average session duration per user when engaging with documents

✅ What did I get done?

  • Deployed the larger GROBID model, enabling broader extraction coverage including titles, authors, formulas, figures, references, and footnotes/endnotes.

  • Integrated frontend to dynamically fetch and display all extracted data from the GROBID output
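
Since GROBID emits TEI XML, the frontend integration amounts to walking that tree and pulling out the fields each view needs. A minimal sketch of header extraction (the namespace and element paths are standard TEI as produced by GROBID; the helper name is illustrative):

```python
import xml.etree.ElementTree as ET

TEI = "{http://www.tei-c.org/ns/1.0}"  # namespace used in GROBID output

def parse_header(tei_xml: str) -> dict:
    """Extract the title and author names from a GROBID TEI document."""
    root = ET.fromstring(tei_xml)
    title = root.findtext(f".//{TEI}titleStmt/{TEI}title")
    authors = []
    for pers in root.iter(f"{TEI}persName"):
        # persName children are forename/surname elements; join their text.
        name = " ".join(p.text for p in pers if p.text)
        authors.append(name)
    return {"title": title, "authors": authors}
```

Figures, formulas, and references live under other TEI elements and can be fetched the same way, which is what lets the frontend render each section independently.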

💡 Learnings / New Thoughts

  • Cursor’s inline interaction model inspired the idea of treating research papers more like modular, interactive codebases

  • Thinking of papers as structured, navigable artifacts (like repos) could help drive UI and UX decisions

  • Realized that users prefer writing their own prompts; we should embrace customization rather than abstract prompts away

  • Structuring the document well is a prerequisite for building effective search and other layered features

🚧 What is limiting right now?

  • No external blockers currently

  • A potential internal constraint: lack of structured daily goals may be reducing my productivity ceiling—experimenting with tighter day-to-day objectives

🎯 Goals for this week

  • Implement clean, intuitive UI with proper loading states for each document section

  • Fully link all extracted content (text, images, formulas) to their correct positions within the document

  • Extract and render formulas accurately within the reading interface


Weekly Update: June 9, 2025

🧭 North-star metric

Number of PDFs uploaded and read via Read Papers Fast

Surpassed this week’s target—very happy with the progress and system performance.

✅ What did I get done?

Deployed GROBID and Python microservice to production, improving PDF parsing throughput and automation.

Deployed a machine learning model (PubLayNet) for scientific image extraction.

Implemented a CI/CD pipeline, reducing deployment-to-production time and enabling faster iteration.

Refactored and updated the database schema to better support linking extracted content (images, text).

💡 Learnings / New Thoughts

Realized that striving for perfection slows down user feedback—shipping early, even if imperfect, is more valuable.

Practicing more flexibility in defining MVP features is accelerating collaboration and progress.

Letting go of anxiety around launch readiness is helping me stay focused on learning from real usage, not just planning.

🚧 What is limiting right now?

Lack of a dedicated remote machine is delaying model training and experimentation cycles.

No clear timeline or milestone structure for VISDA, making prioritization and resource planning difficult.

🎯 Goals for this week

Set up PostHog with tracking for at least 5 critical user events across upload and reading flows.

Link extracted images to the correct content sections.

Extract and render formulas in the reading interface.


Weekly Update: June 2, 2025

📝 Context
Back after a 10-day vacation and some much-needed time with family. Feeling a bit under the weather today, so taking it slow while ramping back up.


✅ What did I get done?

  • Read Fast architecture papers and diagrams, gaining a clear understanding of how we will access the database and how the Python and Next.js pipeline will operate.
  • Initialized a clean microservice repository in Python with proper import structure; migrated legacy code to begin integration.
  • Completed GCP verification steps to enable access and begin validating the cloud environment for CI/CD setup.

💡 Learnings / New Thoughts

  • Stepping away from coding for a few days helped me return with renewed focus and a broader strategic lens. I was able to see technical decisions more clearly and prioritize better.
  • Beginning a meditation practice revealed how crucial mental clarity is to sustaining deep work. It's prompted me to rethink my daily schedule to better support focus and energy management.

🚧 What is limiting right now?

  • Insufficient cloud quota in the Netherlands region is preventing access to GPUs, which is slowing down testing and deployment of resource-intensive components.
  • Need greater clarity on the direction and requirements for image, table, and formula extraction, especially regarding output format and integration with the rest of the pipeline.

🎯 Goals for this week

  • Finalize the CI/CD pipeline such that Python services are integrated with GROBID and the Next.js frontend, enabling full deployment and testing. This will create a stable foundation for building and iterating on further extraction and integration features.
  • Ensure that all extracted images are accurately matched to their corresponding figure metadata and correctly referenced in the paragraphs where they should be displayed.
  • Test the end-to-end pipeline using Gemini-generated keywords and structured GROBID-extracted data to validate the integration and information flow.