Commercializing Research and Inovo Studio

by Kent Lyons, April '23

I recently posted my highlights from CHI '23 in Hamburg, Germany. For the last year or so I've been heads down exploring ideas and getting my first product off the ground. While at CHI, I talked with a lot of folks about what I've been up to, and I figured it might be good to reflect on those discussions, looking back over the past year of my work with Inovo Studio.

The Inovo Studio model

What am I doing with Inovo Studio? The top-level idea is to work on commercializing research. However, I have a rather specific model for how I'm doing that. Several people thought I was doing some sort of consulting, helping others commercialize their research. While I welcome collaborations, that isn't the core activity. Instead, the idea is to be a hybrid between a corporate research lab and a startup incubator.

In a corporate research lab, there is a charter to go and conduct research in some set of areas. The hope is that the results of that research can influence the company in useful ways. Naively that might be to directly impact products, but there are other more indirect ways research can add value for the company as well.

Since directly impacting product from research is tricky (or maybe it's just the flavors of research I've been guided to do in my various roles), a lot of the work ends up dying, even if it seems to have promise. This often happens due to a variety of mismatches between the research itself and the needs of the business units (see, for example, Christensen's theory).

Instead of that fate, what if we could spin the work out as separate products or businesses? This is hard to do in big companies. By default, they are optimized to sell the current product, not create new ones.

However, this is the goal of Inovo Studio: conduct research, and for the most promising parts, see if we can create standalone businesses built on that research. This is where the metaphor of an incubator comes into play. Most incubators cast a net outward for ideas and founders. Instead, we take the most promising work from inside, evaluate it for product-market fit, and then scale up. There is a funnel that progressively narrows from ideas, to research, to market fit, to business. The hope is that a few will make it out the end and be successful enough to fund Inovo Studio to repeat that loop.

First iteration within Inovo Studio

Last year, after CHI '22, I took some time to decompress and did a fair bit of pottery (I have a YouTube channel and an Instagram account focused on that if you're curious). It was great to do something tangible and be selective about incorporating technology (or not!). Around the fall of '22, I was feeling the itch to build something more research related and in line with how I envisioned Inovo Studio.

At that point I set off to build QuikQut. The seed of this idea came from two places. One was my own editing process for my pottery YouTube channel. Video editing can be extremely tedious and slow! It was getting to the point where I was avoiding making videos because of it. The other source of inspiration was a line of work at UIST, CHI, and other venues. Instead of editing video on a timeline by listening to the audio, these systems edit via a transcript. The idea can be traced back about 15 years in the research literature. Back then, the transcripts were crowdsourced (so expensive and slow). Today, speech recognition is just an API call away. QuikQut diverges from my idealized Inovo Studio model since I didn't do the original research, but it is in the same spirit of taking research to the next level and testing product-market fit.
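The core mechanic of transcript-based editing is easy to sketch. Assuming a speech-recognition API has already returned word-level timestamps, deleting words from the transcript reduces to computing which time segments of the video to keep. The `Word` structure and the merge threshold below are hypothetical illustrations, not QuikQut's actual implementation:

```python
# Minimal sketch of transcript-based video editing: deleting words from
# the transcript produces (start, end) segments of video to keep.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the video
    end: float

def keep_segments(words, deleted, merge_gap=0.3):
    """Map the words the editor kept onto (start, end) video segments.

    `deleted` holds the indices of words removed from the transcript.
    Kept words separated by less than `merge_gap` seconds are merged
    into one segment, so the final video has fewer, smoother cuts.
    """
    segments = []
    for i, w in enumerate(words):
        if i in deleted:
            continue
        if segments and w.start - segments[-1][1] <= merge_gap:
            segments[-1] = (segments[-1][0], w.end)  # extend last segment
        else:
            segments.append((w.start, w.end))
    return segments

words = [Word("so", 0.0, 0.2), Word("um", 0.3, 0.6),
         Word("today", 0.7, 1.1), Word("we", 1.2, 1.4),
         Word("throw", 1.5, 1.9)]
# Deleting "um" (index 1) from the transcript leaves two segments:
print(keep_segments(words, {1}))  # [(0.0, 0.2), (0.7, 1.9)]
```

The resulting segments could then be handed to any video toolchain to trim and concatenate; the editor never touches a timeline directly.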

I built the first version of QuikQut in a few weeks. (It helped that I had used similar APIs at Tesla when I replaced the speech recognition system for the fleet.) I put together a rough UI and backend and started using it on my own videos. It was indeed way better than the alternative! I spent the next few months taking QuikQut from something only I could use to a minimum viable product (MVP), something others could use. I then shifted into sales mode in January, targeting YouTubers who were monetized (so there was a connection between a better video editing process and increased revenue). I got a handful of trial users and a few sales, but overall it was slow going. The goal in shortcutting the Inovo Studio process by starting with existing research was to speed up the time to generating revenue. That has proven not to be the case, so I'm revisiting my own next steps.

Academic Perspectives on Commercialization

In recounting parts of that story to a lot of folks at CHI, a few recurring themes came up about commercialization more broadly.

Managed Risk

First, people were very encouraging. This path is not a typical one, but being in academic-flavored industrial research is rather exceptional these days as well. Beyond that, several folks admired the jump I've made in doing a startup and trying to get research out into the world. From my perspective, the risks aren't all that large. Or maybe more accurately, it's been a very deliberate decision, and I've thought a lot about managing the risk. What's the worst that happens? Most likely, I get a software engineering position. That's not my dream job, but it's also not a bad life. I've also lived in Silicon Valley for a decade and a half, where the idea and practice of startups is a very normal thing. I know that is much less the case in other parts of the world. A few people also asked about funding. At this point I'm not looking for any (for several reasons) and instead am bootstrapping Inovo Studio (hence the push to get QuikQut up and going quickly).

Parallels between startups and user centered design

I see a few strong connections between early-stage startups and user-centered research like that common at CHI, along with a couple of strong differences. In terms of similarities, understanding users/customers and their needs is a big one. How do you uncover a problem a user is facing? How do you know it is a real problem and not something they're just saying? How do we think about the context in which that problem exists (e.g., looking at all of the stakeholders involved)? The ideas central to user-centered design also apply in the startup context, only with a startup you're looking for product-market fit and the MVP.

How to Get Customers? Value

Several folks asked me how to get customers, and here there are some analogies to user studies. How does one go about recruiting participants for a study? You might seek out 20 people with certain demographic characteristics relevant to your research and pay them $20 each. With a startup you need to again target a set of people, your customer base. The difference is instead of you paying them $20 each, you want them to pay you $20 each. And instead of 20 people you want 2000. If you can do that (and continue to do it), you have a product!

So how do you get someone to pay you instead of you paying them? Value. The trick with value is that it is measured in dollars and needs to be real as judged by the customer (your own opinion matters very little). It needs to solve a problem for the customer in a way where they would consider giving up their hard-earned money. In some situations, spending the money on a product is an easy decision; for others it's worth some consideration; and some people will simply pass. Everyone at CHI decided in one way or another it was worth their money (or their parent organization's budget).

In a B2B context, if I sell something for $100 and you can make $1000 with it, then of course you'd pay the $100. To paraphrase Alex Hormozi, you'd be stupid to say no. This points to a key difference between business and user studies. By paying participants, one doesn't really need to worry about the value of the research. You're compensating participants for their time so they have an incentive to try the research. Participants are often asked for their opinions, and for published work those opinions are typically favorable. But that is a much weaker signal of value than them handing you cash.

Once they are a customer, the tables are turned, and the value they get needs to be real. How many user studies have you seen where, at the end of the study, the participants beg to keep the research prototype? I've heard of a handful of these cases, but only a handful. In these situations, the researchers have tapped into some need where the participants want to keep using the prototype even with all of the flaws inherent in any research prototype. In these cases, some real need has been discovered, but is that value? Maybe not. How much are they willing to pay? And is that enough to cover your costs and leave a healthy profit margin? During the dot-com bubble, a common business model was more or less to sell $1 for $0.01 in order to scale fast. Any customer would be stupid to say no to that; they make $0.99 on each transaction! But that business model is not sustainable. If you can't get enough money from your customers, even if they want the product badly, you do not have a business.

Idealization of adoption

I had a few somewhat related discussions where there was an unrealistic idealization of the problem space. There were strong opinions that the world should be one way, whereas in reality it is a different way. Sometimes it is a useful tool to intentionally violate some assumptions of the world as it stands today (more on that in a second). But in the context of creating a product, you can't really do that: you are selling into the world as it exists today.

As mentioned above, HCI actually has a lot of the tools to understand user behaviors and the social contexts in which they are performed. We have methods to look at interaction with a piece of software and understand the numerous factors that go into using it one way versus another. Often things like social or business structures come into play; e.g., there are incentives to use the software in some particular way. Or the adoption or use of software is dictated by other stakeholders like management, procurement, and IT, all with their own policies.

If you build a research prototype and it needs special permissions which can only be granted by IT, you cannot work around IT as you might in a user study. Figuring out and addressing their policies and concerns might be a critical roadblock to the adoption of your research idea. Overcoming that objection (and the dozens like it that exist in the actual practice of usage) is likely not a research question, but instead just work to be done.

Similarly, there is often a hope that once a person sees the potential in some research, they will just adopt it. That also rarely happens. There are usually non-trivial gaps that need to be overcome. On the engineering side, there's the fallacy of "if you build it, they will come". It is possible they will, but far from guaranteed. I think the same is true on the research side. There is a hope that once the research is done, people will somehow find and use it. If we look at the body of work at places like CHI, and the subset of research where there is that hope of adoption, we can see very little of that work manages to transition by itself. There is more work to be done.

Intentionally violating assumptions

Violating assumptions is often a good way to conduct research. Look at the world and think about what would happen if some key change occurred. Often this occurs in technology-focused research, where the technology itself is improving rapidly. Therefore, if we relax the constraints on what technology is economically feasible today, we might be able to see into the future in different ways. (As a bit of caution, I've seen many papers that assume improvements that seem rather unrealistic given actual technical trajectories, so this approach takes some care.)

Alan Kay tells a similar story about the creation of the Alto at Xerox PARC. The Alto was not affordable; it cost about 10x the price they were targeting. But by paying that 10x premium for the research prototype, they could experiment with technologies ahead of their time. When the costs came down (e.g., due to Moore's law), they would have a better idea of what problems users and customers might face and what challenges might need to be overcome.

So returning to research and hopes of commercializing it: one needs to be careful about any assumptions that were violated (maybe implicitly). There needs to be a path where they can be bridged for today's user or customer, in a way that is in line with the value discussion above.

Novelty in Academia

Academia, including CHI, UIST, UbiComp, and similar venues, has a strong bias toward novelty. While some academic disciplines greatly value replication of previous results, these fields do not. As such, our publication processes have trained us to seek out novelty and get rewarded for finding it. Tech startups often create something new, but novelty is not a metric that means much by itself for the success of the business. Value is the key metric. Maybe a new technology unlocks some value, but novelty can be a red flag. Why is there not already a market there? Is it because the new research is needed to create it? Or is it that there really isn't a market? It is hard to tell ahead of time.

Likewise, it is hard enough to capitalize on an existing market. Creating a new market and then capitalizing on it is even harder. To that end, competition is often a good thing: the other businesses have already shown that people are willing to trade their money for the product. In these cases, the most important part is execution. Can you actually run the business? Similarly, ideas in that context are not worth much; execution is. (The same goes for people trying to protect their ideas at an early stage with NDAs or patents; that is likely the wrong area to focus on. Instead, create value and execute on it.)


Those were a few of the themes of the discussions I had, and more broadly, of what I've been working on. This world seems intriguing to a lot of academics, but also foreign. I think a lot of people with an HCI background are well equipped to solve the additional challenges needed to turn research into a product. If you squint, it has parallels to scaling a user study up from 20 to 2,000 to 2,000,000 people. There are different challenges along that path, but human-centered thinking is core to making a valuable product.