Jan 08, 2026
Enterprises are sitting on vast quantities of unstructured data, from call records and video footage to customer complaint histories and supply chain signals. Yet this invaluable business intelligence, estimated to make up as much as 90% of the data generated by organizations, has historically remained dormant because its unstructured nature makes analysis extremely difficult.

But if managed and centralized effectively, this messy and often voluminous data is not only a precious asset for training and optimizing next-generation AI systems, enhancing their accuracy, context, and adaptability; it can also deliver profound insights that drive real business outcomes. A compelling example comes from the Charlotte Hornets, a US NBA basketball team, which successfully leveraged untapped video footage of gameplay, previously too copious to watch and too unstructured to analyze, to identify a new competition-winning recruit. Before that data could deliver results, however, analysts working for the team first had to overcome the critical challenge of preparing the raw, unstructured footage for interpretation.

The challenges of organizing and contextualizing unstructured data

Unstructured data presents inherent difficulties because of its widely varying format, quality, and reliability, requiring specialized tools such as natural language processing and AI to make sense of it. Every organization's pool of unstructured data also contains domain-specific characteristics and terminology that generic AI models may not automatically understand. A financial services firm, for example, cannot simply use a general language model for fraud detection. Instead, it needs to adapt the model to understand regulatory language, transaction patterns, industry-specific risk indicators, and unique company context such as data policies. The challenge intensifies when integrating multiple data sources with varying structures and quality standards, as teams may struggle to distinguish valuable data from noise.

How computer vision gave the Charlotte Hornets an edge

When the Charlotte Hornets set out to identify a new draft pick for their team, they turned to AI tools, including computer vision, to analyze raw game footage from smaller leagues, which exist outside the tiers of the game normally visible to NBA scouts and are therefore not as readily available for analysis.

"Computer vision is a tool that has existed for some time, but I think the applicability in this age of AI is increasing rapidly," says Jordan Cealey, senior vice president at AI company Invisible Technologies, which worked with the Charlotte Hornets on this project. "You can now take data sources that you've never been able to consume, and provide an analytical layer that's never existed before."

By deploying a variety of computer vision techniques, including object and player tracking, movement pattern analysis, and geometric mapping of points on the court, the team was able to extract kinematic data, such as the coordinates of players during movement, and generate metrics such as speed, explosiveness, and acceleration.

This provided the team with rich, data-driven insights about individual players, helping them identify and select a new draft pick whose skills and technique filled a hole in the Charlotte Hornets' own capabilities. The chosen athlete went on to be named the most valuable player at the 2025 NBA Summer League and helped the team win their first summer championship title.
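To illustrate the kind of post-tracking calculation this involves, the sketch below takes per-frame court coordinates for a single tracked player and derives speed and acceleration with simple finite differences. It is a minimal example that assumes positions have already been mapped from pixel space onto the court plane; the function name, frame rate, and sample track are illustrative assumptions, not details of the Hornets' or Invisible's actual pipeline.

```python
import numpy as np

def kinematic_metrics(positions: np.ndarray, fps: float = 25.0):
    """Derive per-frame speed and acceleration from tracked (x, y) positions.

    positions: array of shape (n_frames, 2), player coordinates in meters
               on the court plane (i.e., already mapped from pixel space).
    fps:       video frame rate, used to convert frame steps to seconds.
    """
    dt = 1.0 / fps
    # Velocity and acceleration via central finite differences.
    velocity = np.gradient(positions, dt, axis=0)       # m/s per axis
    speed = np.linalg.norm(velocity, axis=1)             # scalar m/s
    acceleration = np.gradient(velocity, dt, axis=0)     # m/s^2 per axis
    accel_magnitude = np.linalg.norm(acceleration, axis=1)
    return speed, accel_magnitude

# Example: a short, made-up track of one player moving up the court.
track = np.array([[1.0, 2.0], [1.4, 2.1], [2.0, 2.3], [2.8, 2.6], [3.8, 3.0]])
speed, accel = kinematic_metrics(track, fps=25.0)
print("peak speed (m/s):", speed.max())
print("peak acceleration (m/s^2):", accel.max())
```

In practice such metrics would be computed over thousands of tracked possessions and aggregated per player, but the underlying arithmetic is as simple as shown here once reliable coordinates exist.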
Image caption: Annotation of a basketball match. Before data from game footage can be used, it needs to be labeled so the model can interpret it. The x and y coordinates of the individual players, shown in bounding boxes, as well as other features in the scene, are annotated so the model can identify individuals and track their movements through time.

Taking AI pilot programs into production

From this successful example, several lessons can be learned. First, unstructured data must be prepared for AI models through intuitive forms of collection and the right data pipelines and management records. "You can only utilize unstructured data once your structured data is consumable and ready for AI," says Cealey. "You cannot just throw AI at a problem without doing the prep work."

For many organizations, this might mean finding partners that offer the technical support to fine-tune models to the context of the business. The traditional technology consulting approach, in which an external vendor leads a digital transformation plan over a lengthy timeframe, is not fit for purpose here: AI is moving too fast, and solutions need to be configured to a company's current business reality.

Forward-deployed engineers (FDEs) are an emerging partnership model better suited to the AI era. Initially popularized by Palantir, the FDE model connects product and engineering capabilities directly to the customer's operational environment. FDEs work closely with customers on-site to understand the context behind a technology initiative before a solution is built.

"We couldn't do what we do without our FDEs," says Cealey. "They go out and fine-tune the models, working with our human annotation team to generate a ground truth dataset that can be used to validate or improve the performance of the model in production."

Second, data needs to be understood within its own context, which requires models to be carefully calibrated to the use case. "You can't assume that an out-of-the-box computer vision model is going to give you better inventory management, for example, by taking that open source model and applying it to whatever your unstructured data feeds are," says Cealey. "You need to fine-tune it so it gives you the data exports in the format you want and helps your aims. That's where you start to see high-performative models that can then actually generate useful data insights."

For the Hornets, Invisible used five foundation models, which the team fine-tuned to context-specific data. This included teaching the models to understand that they were "looking at" a basketball court as opposed to, say, a football field; to understand how a game of basketball works differently from any other sport the model might have knowledge of (including how many players are on each team); and to understand how to spot rules like "out of bounds." Once fine-tuned, the models were able to capture subtle and complex visual scenarios, including highly accurate object detection, tracking, postures, and spatial mapping.

Lastly, while the AI technology mix available to companies changes by the day, they cannot eschew old-fashioned commercial fundamentals: clear goals. Without clarity on the business purpose, AI pilot programs can easily turn into open-ended, meandering research projects that prove expensive in terms of compute, data costs, and staffing.

"The best engagements we have seen are when people know what they want," Cealey observes. "The worst is when people say 'we want AI' but have no direction. In these situations, they are on an endless pursuit without a map."
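To make the ground-truth validation step Cealey describes more concrete, the sketch below compares a detector's predicted bounding boxes against human-annotated boxes for a single frame using intersection-over-union (IoU), a standard way to score detections. It is a minimal, illustrative example under assumed conventions; the box format, matching rule, threshold, and function names are placeholders, not the pipeline Invisible actually used for the Hornets.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned bounding boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def frame_precision_recall(predicted: List[Box], annotated: List[Box],
                           threshold: float = 0.5) -> Tuple[float, float]:
    """Greedily match predictions to human-labeled boxes and score the frame."""
    unmatched = list(annotated)
    true_positives = 0
    for pred in predicted:
        best = max(unmatched, key=lambda gt: iou(pred, gt), default=None)
        if best is not None and iou(pred, best) >= threshold:
            true_positives += 1
            unmatched.remove(best)  # each ground-truth box matches at most once
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(annotated) if annotated else 0.0
    return precision, recall

# Example: two human-labeled players; the model found one and hallucinated one.
ground_truth = [(100, 200, 140, 320), (400, 180, 445, 310)]
detections = [(102, 205, 138, 318), (600, 100, 640, 220)]
print(frame_precision_recall(detections, ground_truth))  # (0.5, 0.5)
```

Aggregating scores like these across a held-out, human-annotated sample is one common way a fine-tuned model's production performance can be validated or flagged for further improvement.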
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.