AI feels like the acronym of 2025. Beyond the hype, it promises real-world utility for engineering teams. At Lune, we’re building emissions intelligence that returns the most accurate results, quickly and easily. To do that at scale, we need tools that accelerate data collection, simplify integration, and speed up climate action.
That’s where LLMs come in. This blog shares how we’ve been using them, both in our development workflow and in our products.
In recent months, the Lune engineering team has been experimenting with using LLMs in our development workflow. The reviews were originally quite mixed.
Integrated development environment (IDE) models are designed to help programmers write software efficiently. I found myself arguing that using LLMs in this application was more hassle than it was worth: they caused bugs that took longer to unpick than simply writing the code myself.
But things change incredibly quickly in this space. In the months since that lukewarm take, I've become quite reliant on LLMs in my workflow, as have some of my coworkers.
Nowadays, I try to use LLMs where I know they can reliably perform. The keyword is “reliably”, because I have no desire to add "LLM debugging" to my workload.
Selecting the most relevant emissions factor
To calculate emissions, you generally need an emissions factor. Lune’s database holds over 60,000 emissions factors, each pertaining to a different product or service.
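At its simplest, the calculation itself is just an activity amount multiplied by a factor. A minimal sketch, with a made-up factor value purely for illustration:

```python
# Minimal sketch of an emissions calculation:
#   emissions (kg CO2e) = activity amount x emissions factor.
# The factor value used below is invented for illustration.

def calculate_emissions(amount: float, factor_kg_co2e_per_unit: float) -> float:
    """Return estimated emissions in kg CO2e for a given activity amount."""
    return amount * factor_kg_co2e_per_unit

# e.g. 120 kWh of electricity at a hypothetical factor of 0.21 kg CO2e/kWh
print(calculate_emissions(120, 0.21))  # ≈ 25.2
```

The hard part, of course, is not the multiplication but choosing the right factor from 60,000 candidates.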
Emissions factor search therefore underpins a large part of automating our offering. This functionality is built on a vector database, which itself uses AI algorithms to enable semantic search. This means we can match a purchase to the right factor, calculating the emissions of the apple you just bought from the supermarket rather than mistaking it for a laptop.
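To illustrate the idea (not Lune’s actual implementation), here is a toy nearest-neighbour lookup by cosine similarity. The tiny hand-crafted vectors stand in for real embeddings; a production vector database would store vectors produced by an embedding model:

```python
import math

# Toy semantic search sketch: hand-crafted 3-dimensional "embeddings"
# stand in for real model output, purely to show the lookup mechanics.
FACTORS = {
    "apples, fresh":   [0.9, 0.1, 0.0],
    "laptop computer": [0.1, 0.9, 0.2],
    "road freight":    [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_factor(query_vec: list[float]) -> str:
    """Return the factor whose embedding is most similar to the query."""
    return max(FACTORS, key=lambda name: cosine(query_vec, FACTORS[name]))

# A query embedding that sits close to "apples, fresh"
print(nearest_factor([0.85, 0.15, 0.05]))  # apples, fresh
```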
Testing information retrieval (IR) system accuracy is tricky.
Luckily, IR system evaluation measures are well documented, and the implementation of basic maths formulas to calculate performance metrics is straightforward.
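For example, two of the standard measures, precision@k and reciprocal rank, are only a few lines each. A sketch, assuming we already hold labelled relevance judgements for each query’s ranked results:

```python
# Two standard IR evaluation metrics, assuming labelled relevance
# judgements are available for each query's ranked result list.

def precision_at_k(relevant: set, ranked: list, k: int) -> float:
    """Fraction of the top-k results that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

def reciprocal_rank(relevant: set, ranked: list) -> float:
    """1 / rank of the first relevant result (0.0 if none appears)."""
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1 / rank
    return 0.0

ranked = ["ef_12", "ef_07", "ef_33", "ef_98"]   # hypothetical factor IDs
relevant = {"ef_07", "ef_98"}

print(precision_at_k(relevant, ranked, 3))  # 1 of top 3 relevant → ≈ 0.333
print(reciprocal_rank(relevant, ranked))    # first hit at rank 2 → 0.5
```

Averaging reciprocal rank across a query set gives mean reciprocal rank (MRR), a common single-number summary of search quality.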
The only remaining problem was labelling search results to run tests and calculate metrics. With a lean engineering team at Lune, we don't have the time to spend on tedious labelling tasks. Here we saw another use case for LLMs — allowing us to label data before passing it on to humans for validation.
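A hypothetical sketch of that workflow: an LLM proposes a relevance label with a confidence score, and low-confidence pairs are routed to a human reviewer. The `ask_llm` function here is a stand-in stub, not a real API call:

```python
# LLM-assisted labelling sketch: auto-accept confident labels, route the
# rest to humans. ask_llm is a stub; a real version would call an LLM API.

def ask_llm(query: str, result: str) -> tuple[str, float]:
    """Stub returning (label, confidence) for a query/result pair."""
    label = "relevant" if query.split()[0] in result else "not_relevant"
    return label, 0.9 if label == "relevant" else 0.6

def label_results(query: str, results: list[str], threshold: float = 0.8):
    """Split results into auto-labelled and needs-human-review buckets."""
    auto, needs_review = [], []
    for result in results:
        label, confidence = ask_llm(query, result)
        (auto if confidence >= threshold else needs_review).append((result, label))
    return auto, needs_review

auto, review = label_results("apple fresh", ["apple, raw", "laptop 13in"])
print(auto)    # [('apple, raw', 'relevant')]
print(review)  # [('laptop 13in', 'not_relevant')]
```

Humans then only validate the uncertain tail, rather than labelling every pair from scratch.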
Our first LLM-powered product was our transaction document estimate API. This feature allows spend management platforms to provide users with emissions estimates calculated using line item data found on receipts and other transaction documents.
Often, spend management platforms use OCR technology to parse invoice/receipt images into text. The output of this process varies by company. However, our API accepts any JSON object, resulting in a smoother integration process.
Extracting information from this unstructured data is a prime use case for AI models. An LLM can pull the relevant line items and prices from the input, as well as infer crucial information such as product category and transaction region. Combining category and region allows us to select a granular emissions factor for each item purchased.
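To make the flow concrete, here is an illustrative input/output pair: an arbitrary OCR JSON payload in, line items in a fixed schema out. The field names, categories, and schema are invented for the example, not Lune’s actual API shapes:

```python
import json

# Hypothetical transaction-document flow: whatever JSON the OCR step
# produced goes in, and the LLM is asked to return a fixed schema.
ocr_payload = json.loads("""
{
  "merchant": "Green Grocer Ltd",
  "rows": [
    {"desc": "Apples 1kg", "amt": "2.40"},
    {"desc": "Oat milk 1L", "amt": "1.90"}
  ],
  "country": "GB"
}
""")

# The kind of structured output an LLM extraction step would be asked
# to produce (categories and region inferred, not just copied):
extracted = {
    "line_items": [
        {"description": "Apples 1kg", "price": 2.40, "category": "food.fruit"},
        {"description": "Oat milk 1L", "price": 1.90, "category": "food.dairy_alternatives"},
    ],
    "region": "GB",
}

total = sum(item["price"] for item in extracted["line_items"])
print(round(total, 2))  # 4.3
```

Because the input side is just "any JSON object", each platform can send its OCR output as-is, and the extraction step absorbs the variation.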
We have also leveraged LLMs’ capabilities in our tender tool. Naturally, each logistics company structures its requests for quotation (RFQs) differently: some have coordinates whereas others have ports, and some give the load in kg whereas others show the number of containers. These inconsistencies make it difficult to map spreadsheets to the structured format required for estimate calculations.
LLMs' use of text embeddings allows them to capture semantic meaning, enabling them to map columns based on context rather than position or exact string matching, as simpler algorithms would.
For the same reason, we can map data values to Lune data enums. Enums are used to represent a fixed set of related values, making code more readable, maintainable, and less prone to errors.
Using AI to map messy Excel files to our API schema has hugely reduced manual effort on our clients’ side, accelerating data integration in the process.
We’re using LLMs where they’re truly valuable: in reducing engineering friction, enabling customer self-service, and making emissions data smarter, faster, and easier to access. Keeping this focus as we develop our products is key to ensuring we don't start using AI in places where it's not needed.
That said, I'm certain there are more use cases for the technology within our product. The next learning curve is using the right tooling. I don't just mean the fact that I will eventually succumb to an AI-integrated IDE. I'm also referring to the products available to help streamline the development of LLM apps.
Recently, we dedicated a whole week to getting stuck into LLM tooling, creating proofs of concept with products such as N8N and LangGraph. Personally, this shone a light on how simple creating LLM products can be and made me excited for the next project.
To learn more about how you can integrate Lune’s evolving emissions intelligence into your platform, get in touch.