When I joined Nuna, the core business was focused on supporting value based healthcare, and my job was tightly intertwined with the business logic needed to support it. Rather than forcing you to understand all of this logic, I will put the details in links that you can read if you want to understand the space and its unique challenges better.

If you are interested in learning more about the basics of value based healthcare, please read this summary that I wrote.
TL;DR: Value based healthcare is a new payment model between the insurance company and the provider that uses various approaches to reward the provider based on patient outcomes.
Nuna’s business model was simple enough (at least at a high level):
- Every month (sometimes more often) an insurance company gives us a bunch of data.
- We run a pipeline that processes this data, generates a bunch of information related to the value based program, and stores it in a database.
- We provide a website that allows certified users to view this data.
- Insurance companies pay us a certain amount of money to do this.
Nuna had a unique problem. Most small companies create a product but have a very hard time selling it and differentiating themselves from the competition. Nuna had the exact opposite problem – with John and Jini as the sales team, we could sell to pretty much anybody. Our issue was that every value based healthcare program is sufficiently different that it’s really hard to create a scalable product that makes it easy to onboard new customers and programs with very little effort. The technical issues weren’t that hard, but the program logic was.
I spent my first year working on the frontend and the API layer that serves data to the frontend. I had to come up to speed on React hooks, TypeScript, and SQL. I created a configuration infrastructure that let us lay out a page in JSON, where each page had a default configuration and each insurance company could have overrides to suit its needs. This made some parts of the system much more scalable, but not enough to make a real difference.
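To make this concrete, here is a minimal sketch of the layering idea. The real configuration was JSON served through the API layer to the React/TypeScript frontend; the names below (PageConfig, mergeConfigs, the field names) are made up for illustration and are not the actual code.

```scala
// Hypothetical sketch of the "default + per-payer override" layering; not the actual code.
final case class PageConfig(fields: Map[String, String])

// Override values win; anything the insurance company does not override
// falls back to the page's default configuration.
def mergeConfigs(default: PageConfig, overrides: PageConfig): PageConfig =
  PageConfig(default.fields ++ overrides.fields)

val defaultDashboard = PageConfig(Map(
  "title"   -> "Program Overview",
  "columns" -> "measure,target,actual"
))

// This payer only changes the title; the column layout is inherited from the default.
val payerOverride = PageConfig(Map("title" -> "Acme Health Program Overview"))

val rendered = mergeConfigs(defaultDashboard, payerOverride)
// rendered.fields("columns") == "measure,target,actual"
```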
In 2023 we hired a new CTO to lead engineering and started designing the next generation product in earnest. I was made the chief architect of the nextgen design and reported directly to the CTO. At this point, I realized that I really had to dig into as many value based programs as I could to better understand the details of how they worked and how they could be configured.
I found that all value based programs require the same stages to evaluate the program, although the business logic varies greatly across the individual programs. This link describes the various pipeline stages and summarizes some of the common logic in each stage. (Bonus points to anybody who actually reads this!)
TL;DR: Some pipeline stages can have neatly defined configuration that accounts for 80% of the scenarios, but the other 20% requires custom logic. Other pipeline stages are really just lists of formulas and calculations.
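To illustrate the "same stages, different logic" point, here is a rough sketch of the shape of such a pipeline. The stage names, the Records type, and the methods are all hypothetical; the real stages are described in the linked write-up.

```scala
// Illustrative only: every program runs the same ordered stages, while the logic
// inside each stage is program-specific.
final case class Records(rows: Seq[Map[String, Any]])

trait PipelineStage {
  def name: String
  def run(input: Records): Records
}

// A program's pipeline is the shared stage sequence with program-specific logic
// plugged into each stage; the output of one stage feeds the next.
def runPipeline(stages: Seq[PipelineStage], input: Records): Records =
  stages.foldLeft(input) { (data, stage) => stage.run(data) }

// Example of a stage whose behavior is driven by neatly defined configuration
// (the 80% case); the remaining 20% would need custom code.
final case class FilterStage(name: String, column: String, allowed: Set[String]) extends PipelineStage {
  def run(input: Records): Records =
    Records(input.rows.filter(r => r.get(column).exists(v => allowed.contains(v.toString))))
}
```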
Once I had a basic understanding of how the pipeline should look and how the program logic behaves, a whole different set of requirements and configuration needs emerged.
These additional complexities are described in this link.
TL;DR: Extra features are needed to account for receiving claims long after the visit/procedure occurred and extra complexity is needed to handle dynamic changes that can occur during the program year.
Some people understood portions of the material above, but maybe two people in the company understood all of the nuances and requirements described in these links (and they were not the people building the end-to-end system). It took me considerable effort to learn all of this and to document it.
It became clear that two types of configuration were needed (a rough sketch of both shapes follows this list):
- Configuration defining the program logic, which doesn’t fall into neat boxes with well-defined properties.
  - Some parts of the program could be described that way.
  - Much of the actual program logic, however, is just a list of formulas that require re-weighting, lookups, percentile calculations, special filtering, etc.
  - There’s no obvious way to express this logic using simple, intuitive configuration.
- Additional information needed to manage some of the dynamic changes that can occur during the program year.
  - The system has to be very flexible to handle various permutations of this configuration.
  - While complex, this extra configuration is much easier to describe.
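Here is a rough, purely hypothetical sketch of what the two shapes might look like; the names, fields, and expressions are invented, and the real configuration was considerably richer.

```scala
// Shape 1: program logic that is essentially an ordered list of formulas.
// The interesting behavior lives inside the expressions, not in neatly typed fields.
final case class FormulaStep(name: String, expression: String)

val qualityLogic: Seq[FormulaStep] = Seq(
  FormulaStep("rawScore",       "sum(measureScores)"),
  FormulaStep("reweighted",     "rawScore / totalWeight"),
  FormulaStep("percentileRank", "percentile(reweighted, peerScores)")
)

// Shape 2: the additional configuration for dynamic, mid-year behavior.
// Complex to reason about, but easy to describe with ordinary, well-defined fields.
final case class ProgramYearPolicy(
  allowMidYearEnrollment: Boolean,
  proRatePartialYears: Boolean,
  claimsRunOutMonths: Int // how long to keep accepting late-arriving claims
)

val examplePolicy = ProgramYearPolicy(
  allowMidYearEnrollment = true,
  proRatePartialYears = true,
  claimsRunOutMonths = 6
)
```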
At the time that I joined Nuna, the program logic was hard-coded in Scala, which made it very difficult to handle contract overrides and things like merger/demerger logic. There were simply a LOT of details to get right, and the engineers did not always understand the requirements.
I decided on the following design:
- Represent the program logic using a Domain Specific Language (DSL).
  - This puts the program logic into simpler terms than the existing Scala code.
  - It is optimized to handle common needs easily, such as:
    - Defining and managing multiple data stratifiers.
    - Defining and managing lists of measures used for quality scoring and payment.
- Provide additional configuration data that defines other program behaviors.
- Provide a basic execution unit that reads the DSL and the other configuration to produce the desired output.
  - Interpreting the DSL in the execution unit allows it to automatically apply contract overrides, true-ups, etc. and to put the majority of the complex behaviors in a single place that doesn’t have to be duplicated (a rough sketch of this shape follows the list).
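The sketch below is not the real DSL or execution unit; all of the names (Stratifier, Measure, ProgramSpec, ExecutionUnit, applyOverrides) are hypothetical. It only illustrates the shape of the design: the program logic is data that the execution unit interprets, and cross-cutting behaviors like contract overrides are applied in one place.

```scala
// Hypothetical sketch of the design's shape; not the actual implementation.
final case class Stratifier(name: String, expression: String)            // e.g. split results by region
final case class Measure(name: String, formula: String, weight: Double)  // measures used for quality scoring and payment
final case class ProgramSpec(stratifiers: Seq[Stratifier], measures: Seq[Measure])

// A contract-level tweak, e.g. one partner re-weighting a measure.
final case class ContractOverride(measureName: String, newWeight: Double)

object ExecutionUnit {
  // Because the execution unit interprets the DSL, behaviors like overrides and
  // true-ups live here once instead of being duplicated inside every program.
  def applyOverrides(spec: ProgramSpec, overrides: Seq[ContractOverride]): ProgramSpec = {
    val byMeasure = overrides.map(o => o.measureName -> o.newWeight).toMap
    spec.copy(measures = spec.measures.map { m =>
      byMeasure.get(m.name).fold(m)(w => m.copy(weight = w))
    })
  }
}

val baseSpec = ProgramSpec(
  stratifiers = Seq(Stratifier("byRegion", "member.region")),
  measures    = Seq(Measure("readmissionRate", "readmissions / discharges", weight = 0.4))
)

// One contract bumps the weight of a single measure; everything else is untouched.
val contractSpec = ExecutionUnit.applyOverrides(baseSpec, Seq(ContractOverride("readmissionRate", 0.5)))
```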
Our goals were:
- Short term: Make it possible to represent any program directly by manually writing DSL code and creating the other configuration.
- Long term: Provide a UI that our partners could use to design new programs or to modify existing programs without us having to do any work.
  - The UI would support a subset of the DSL capabilities, so the goal was that 80% of the programs could be managed using the UI and that 20% would require somebody to write DSL by hand.
- Medium term: Offer some programs that we designed ourselves (i.e., we’d write the DSL once and sell it multiple times), but also have a simpler version of the UI that allows partners to make their own tweaks.
The assumption was that we’d start with a somewhat generic DSL, and that as we implemented more programs and new patterns emerged, we’d add functionality to handle those patterns more easily. In other words, it was meant to evolve and improve over time as we learned more.

Creating the DSL and the corresponding infrastructure wasn’t too hard and we were able to implement the majority of the functionality fairly quickly. Some caveats are:
- As much fun as it is to create your own expression parser (seriously!), I couldn’t justify the extra work when third-party tools exist. Hence, I used JEXL for expression evaluation, which worked fine (except that its error messages were often not very helpful while debugging). A minimal usage sketch follows this list.
- I implemented a lot of the configuration options mentioned in the “additional complexities” link, but not all of them.
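Here is a minimal sketch of evaluating a formula with Apache Commons JEXL (shown in Scala since the pipeline code was on the JVM). The expression and variable names are made up; the point is only that formulas live in the DSL as strings and get evaluated against a context at run time.

```scala
import org.apache.commons.jexl3.{JexlBuilder, MapContext}

// Build an engine once and compile the expression string taken from the DSL.
val jexl = new JexlBuilder().create()
val expression = jexl.createExpression("numerator / denominator * weight")

// Bind the variables the formula refers to, then evaluate.
val context = new MapContext()
context.set("numerator", 42.0)
context.set("denominator", 100.0)
context.set("weight", 0.25)

val score = expression.evaluate(context) // 0.105
```

Using an existing evaluator meant giving up some control over error reporting, which is where the debugging pain mentioned above came from.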

What went right:
- I wrote the DSL code for several programs and everything seemed to work fine.
- We got most of the necessary features done in the execution unit to support existing partners.
- From the DSL, I was able to automatically generate interactive HTML that let anybody review the logic for each program in detail (e.g. what calculations were done in each stage, how the data flowed from one calculation to the next, etc.). A rough sketch of the idea is below.
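I won’t reproduce the actual generator; the types, table layout, and example formulas below are invented for illustration, and the real output was interactive and showed stage-by-stage data flow rather than a single static table.

```scala
// Hypothetical sketch: render a static HTML table straight from DSL formula definitions.
final case class Formula(name: String, expression: String, dependsOn: Seq[String])

def renderHtml(programName: String, formulas: Seq[Formula]): String = {
  val rows = formulas.map { f =>
    s"<tr><td>${f.name}</td><td><code>${f.expression}</code></td><td>${f.dependsOn.mkString(", ")}</td></tr>"
  }.mkString("\n")

  s"""<h1>$programName</h1>
     |<table>
     |  <tr><th>Calculation</th><th>Formula</th><th>Inputs</th></tr>
     |$rows
     |</table>""".stripMargin
}

val html = renderHtml("Sample Program", Seq(
  Formula("qualityScore", "sum(measureScores) / totalWeight", Seq("measureScores", "totalWeight")),
  Formula("payment", "baseRate * qualityScore", Seq("baseRate", "qualityScore"))
))
```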

What went wrong:
- Ingesting new partner data (e.g. creating data quality checks, conversion to our common data model, etc.) was handled by another team and it continued to take a very long time to onboard new partner data.
- There wasn’t an immediate market for the programs that we created ourselves, but even if there was, ingesting the new partner data would continue to be the main bottleneck.
- The DSL made implementing program logic much easier, but the long pole continued to be the time needed to properly understand a program’s logic.
- The program documents were created to sell the programs to providers, and hence contained a lot of ambiguity, inconsistencies, omissions, and even errors.
- We lacked the tooling required to easily validate our data.
- We encountered one program that didn’t fit the existing DSL model very well (although we later decided to de-prioritize those types of outlier programs).
Ultimately, the biggest issue was that our partners simply did not pay us very much. Considering the effort just to ingest their data, it was very difficult to break even. I wrongly assumed that we made money off of each partner/program and that the problem was simply scaling to support more of them efficiently.

The reality was that even if 1) everybody in the US was under a value based contract managed by Nuna and 2) supporting that volume required us only to triple our staff, we probably still would not break even.
Hence, I was not entirely surprised when, on Jan 8th 2025, I awoke to an email announcing that Nuna was exiting the value based healthcare business in favor of a more lucrative project that it began a year earlier (and that seems to have great promise).

For the first time in my entire career, I was laid off.