Diddo’s new funding will bring its shoppable TV API to streaming platforms

Diddo offers an API that lets streaming services and other platforms embed shoppable video, allowing consumers to purchase their favorite characters’ clothing and accessories right from their screens. The company announced Wednesday that it has raised $2.8 million in seed funding.

Diddo was founded in late 2022 by Rishi Nair, Ryan Sullivan and Pamela Chen. Oddly enough, it started as a Google Chrome extension designed for Nair and Sullivan’s mothers, “Selling Sunset” fans who wanted to dress like their favorite reality TV stars. Today, the company has developed an API that uses proprietary computer vision AI technology to identify products in TV shows and movies. The AI also selects comparable products so shoppers can buy dupes for less if, for example, Kim Kardashian’s $700 Balenciaga T-shirt is outside their price range.

The funding round was led by Link Ventures, with participation from Neo, Dante D’Angelo (Valentino), Erica Lockheimer (LinkedIn), Camille Ricketts (ex-CMO of Notion), an anonymous Disney executive, and Scott Forstall, best known for leading the Apple team that created iOS.

The new capital will support product development and expand the company’s eight-person team. The company recently hired Rob Sussman, also a Diddo investor, as COO; he was previously CFO of Sundance and executive vice president of MGM+ (formerly Epix).

Diddo has signed deals with 12 companies so far, including Dailymotion, Mux, the Highlights app, social sports platform PlayersOnly, film and TV collective The Big Picture, and fashion brand Blaire New York, among others. The company also told us it is actively in talks with Hulu and another streaming giant.

Image credits: Diddo

Diddo believes its API stands out from competitors because its computer vision technology is integrated directly into a platform’s video player.

As Nair explained to TechCrunch: “We’re the only company doing this so far. These companies aren’t required to send their videos outside of their ecosystem… [They think] it’s a nonstarter if they have to send their video out to a third-party API to run computer vision. So what we were able to figure out was how to integrate our computer vision within their video ecosystem, so we can go all the way from video ingestion to commerce capabilities without ever leaving.”
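In practice, a player-side integration of this kind could look something like the sketch below. It is purely illustrative: the `/shoppable` endpoint, the `ShoppableProduct` shape and the `fetchProductsAt` helper are assumptions made for the example, not Diddo’s published API.

```ts
// Illustrative sketch only: the endpoint and types below are assumptions,
// not Diddo's published API. The streamer's video element stays inside
// their ecosystem; only lightweight timestamp lookups leave the player.

interface ShoppableProduct {
  id: string;
  title: string;
  priceCents: number;
  productUrl: string; // retailer checkout link
}

// Hypothetical endpoint returning products visible at a given playback time.
async function fetchProductsAt(videoId: string, seconds: number): Promise<ShoppableProduct[]> {
  const res = await fetch(`/shoppable/${videoId}/products?t=${Math.floor(seconds)}`);
  return res.ok ? res.json() : [];
}

// Hook the lookup into the platform's own HTML5 video player.
function attachShoppableOverlay(video: HTMLVideoElement, videoId: string): void {
  let lastSecond = -1;
  video.addEventListener("timeupdate", async () => {
    const second = Math.floor(video.currentTime);
    if (second === lastSecond) return; // query at most once per second
    lastSecond = second;
    renderOverlay(await fetchProductsAt(videoId, second));
  });
}

function renderOverlay(products: ShoppableProduct[]): void {
  // e.g. populate a "shop this scene" tray next to the player
  console.log(products.map((p) => p.title));
}
```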

However, one challenge is that running computer vision on a video watched by millions of users simultaneously is “incredibly heavy on the end user’s device,” Nair admitted. “To avoid this problem, we decided to develop the product with a timestamped approach to product identification. So, we run the computer vision once on the video, where it identifies all the products found in the content and places them in a timestamped database. Because the products in on-demand content don’t change, we only need to run it once on our end and don’t require anything from the streamer or the end user.”
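Here is a minimal sketch of that timestamped approach, under stated assumptions: detection is stubbed out (Diddo’s model is proprietary), it runs once server-side over sampled frames, and the results are bucketed by playback second into a lookup table. At watch time the player only does a cheap dictionary read, so nothing heavy runs on the viewer’s device.

```ts
// One-time, server-side ingestion pass: run detection once per sampled frame
// and bucket the results by playback second. detectProducts is a stub standing
// in for Diddo's proprietary computer vision model, which is not public.

interface DetectedProduct {
  id: string;
  label: string;     // e.g. "black bomber jacket"
  dupeIds: string[]; // cheaper comparable products ("dupes")
}

type TimestampedIndex = Map<number, DetectedProduct[]>; // second -> products

// Stub: a real implementation would run the vision model on the frame.
function detectProducts(frame: Uint8Array): DetectedProduct[] {
  return [];
}

// Build the index once per title; frames are (second, pixels) pairs.
function buildIndex(frames: Iterable<[number, Uint8Array]>): TimestampedIndex {
  const index: TimestampedIndex = new Map();
  for (const [second, frame] of frames) {
    const found = detectProducts(frame);
    if (found.length > 0) index.set(second, found);
  }
  return index;
}

// At playback time the lookup is O(1); no vision runs on the viewer's device.
function productsAt(index: TimestampedIndex, currentTime: number): DetectedProduct[] {
  return index.get(Math.floor(currentTime)) ?? [];
}
```

Because on-demand content doesn’t change, such an index would only need to be built once per title and could then be served as static data to every viewer.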
