Artificial intelligence is changing how data is stored and accessed. Traditional data storage systems were designed to handle simple commands from a handful of users at a time, whereas today's AI systems, with millions of agents, must continuously access and process large amounts of data in parallel. Traditional storage systems have accumulated layers of complexity that slow AI systems down, because data must pass through many tiers before reaching the graphics processing units (GPUs) that serve as AI's brain cells.
Cloudian, co-founded by Michael Tso '93, SM '93 and Hiroshi Ohta, is helping storage keep pace with the AI revolution. The company has developed a scalable storage system for businesses that helps data flow smoothly between storage and AI models. The system reduces complexity by applying parallel computing to data storage, consolidating AI functions and data onto a single parallel-processing platform that stores, retrieves, and processes scalable datasets, with direct, high-speed transfers between storage and GPUs and CPUs.
Cloudian's integrated storage-computing platform simplifies the process of building commercial-scale AI tools and gives businesses a storage foundation that can keep up with the rise of artificial intelligence.
"One of the things people miss about artificial intelligence is that it's all about the data," Tso says. "You can't get a 10 percent improvement in AI performance with 10 percent more data, or even 10 times more data: you need 1,000 times more data. Being able to store that data in a way that's easy to manage, and structured so that you can embed computations in it and run operations as the data arrives, without moving the data, is where this industry is headed."
From MIT to industry
As an undergraduate at MIT in the 1990s, Tso was introduced by Professor William Dally to parallel computing, a type of computation in which many calculations occur simultaneously. Tso also worked on parallel computers with Associate Professor Greg Papadopoulos.
"It was an amazing time, because most schools had one supercomputing project; MIT had four," Tso recalls.
As a graduate student, Tso worked with MIT Senior Research Scientist David Clark, a computing pioneer who contributed to early internet architecture, particularly the transmission control protocol (TCP) that delivers data between systems.
"As a graduate student at MIT, I worked on disconnected and intermittent networking operations for large-scale distributed systems," Tso says. "It's funny: 30 years later, that's what I'm doing today."
After graduating, Tso worked at Intel's Architecture Lab, where he invented data-synchronization algorithms used by BlackBerry. He also created specifications for Nokia that helped ignite the ringtone download industry. He then joined Inktomi, a startup co-founded by Eric Brewer SM '92, PhD '94, that pioneered search and web content distribution technologies.
In 2001, Tso started Gemini Mobile Technologies with Joseph Norton '93, SM '93 and others. The company built the world's largest mobile messaging systems to handle the massive surge in data from camera phones. Then, in the late 2000s, cloud computing became a powerful way for businesses to rent virtual servers as they grew. Tso noticed that the amount of data being collected was growing far faster than networking speeds, so he decided to pivot the company.
"Data is created in many different places, and that data has its own gravity: it costs money and time to move it," Tso explains. "That means the end state is a distributed cloud that reaches out to edge devices and servers. You have to bring the cloud to the data, not the data to the cloud."
In 2012, Tso officially launched Cloudian out of Gemini Mobile Technologies, with a new focus on helping customers with scalable, distributed, cloud-compatible data storage.
"What we didn't see when we first founded the company was that AI would be the ultimate use case for data at the edge," Tso says.
Although Tso's research at MIT began more than two decades ago, he sees strong connections between what he worked on then and the industry today.
"It's as if my whole life is replaying, because David Clark and I were working on disconnected and intermittently connected networks, which are part of every edge use case today, and Professor Dally was working on very fast, scalable interconnects," Tso says, noting that Dally is now senior vice president and chief scientist at the leading AI company NVIDIA. "Now, when you look at modern NVIDIA chip architecture and the way they do chip-to-chip communication, you see Dally's work in it. With Professor Papadopoulos, I worked on making application software run on parallel computing hardware without having to rewrite the application, and that's the problem we are trying to solve with NVIDIA."
Today, the Cloudian platform uses an object storage architecture, in which all kinds of data (documents, videos, sensor data) are stored as unique objects with metadata. Object storage can manage massive datasets in a flat file structure, making it ideal for unstructured data and AI systems, but traditionally it could not feed data directly to AI models without the data first being copied into a computer's memory system, creating latency and energy bottlenecks for companies.
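The object-storage model described above can be sketched in a few lines. This is a minimal illustration, not Cloudian's actual API: the `ObjectStore` class, its method names, and the sample keys are all hypothetical. The point it shows is that every item is a blob plus metadata, addressed by a key in a single flat namespace rather than a directory tree, and that metadata lookups are how pipelines select data at scale.

```python
# Hypothetical sketch of an object store (not Cloudian's real interface).
# Each object = raw bytes + arbitrary metadata, in one flat key namespace.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: key -> (data, metadata)

    def put(self, key: str, data: bytes, **metadata):
        """Store a blob together with arbitrary key/value metadata."""
        self._objects[key] = (data, dict(metadata))

    def get(self, key: str) -> bytes:
        """Retrieve the raw bytes for a key."""
        return self._objects[key][0]

    def search(self, **filters):
        """Return keys whose metadata matches every filter -- the kind
        of lookup an AI pipeline uses to pick out training data."""
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(f) == v for f, v in filters.items())]

store = ObjectStore()
store.put("scans/patient-001.dcm", b"...", modality="MRI", label="tumor")
store.put("scans/patient-002.dcm", b"...", modality="CT", label="healthy")
print(store.search(label="tumor"))  # -> ['scans/patient-001.dcm']
```

Because the namespace is flat, there is no hierarchy to traverse: selecting a training subset is a metadata filter rather than a walk over nested directories.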
In July, Cloudian announced that it has extended its object storage system with a vector database that stores data in a form immediately usable by AI models. As data is ingested, Cloudian computes vector representations of it in real time to power AI tools such as recommendation engines, search, and AI assistants. Cloudian also announced a partnership with NVIDIA that enables its storage system to work directly with the AI company's GPUs. Cloudian says the new system enables even faster AI operations and reduces computing costs.
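The "vectorize on ingest" idea can be illustrated with a toy example. Everything here is an assumption for illustration: the `embed` function is a stand-in for a real embedding model, and `VectorIndex` is not Cloudian's vector database. What it demonstrates is the workflow the article describes: each item gets an embedding the moment it arrives, so similarity search is available to AI tools without a separate export-and-transform step.

```python
# Toy sketch of vectorize-on-ingest (hypothetical classes, not a real API).
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: normalized character-frequency vector over a-z.
    # A production system would call a learned embedding model here.
    counts = [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

class VectorIndex:
    def __init__(self):
        self._vectors = {}  # key -> embedding, built at ingest time

    def ingest(self, key: str, text: str):
        # Compute the vector as the data arrives, not in a later batch job.
        self._vectors[key] = embed(text)

    def nearest(self, query: str) -> str:
        # Cosine similarity (vectors are unit-length, so a dot product).
        q = embed(query)
        return max(self._vectors,
                   key=lambda k: sum(a * b for a, b in zip(q, self._vectors[k])))

idx = VectorIndex()
idx.ingest("doc1", "gpu cluster scheduling")
idx.ingest("doc2", "tumor dna sequencing")
print(idx.nearest("cancer genome dna"))  # -> doc2
```

The design choice being sketched is where the embedding work happens: doing it at ingest keeps the index always current, so a recommendation engine or assistant can query it the moment data lands in storage.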
"NVIDIA contacted us about a year and a half ago, because GPUs are only useful if you have data that keeps them busy," Tso says. "People are now realizing that it's easier to bring AI to the data than to move huge datasets around. Our storage systems embed a lot of AI functions, so we're able to pre- and post-process data for AI near where we collect and store the data."
Storage first
Cloudian is helping about 1,000 companies around the world get more value from their data, including large manufacturers, financial service providers, healthcare organizations, and government agencies.
For example, Cloudian's storage platform is helping one large manufacturer use artificial intelligence to determine when each of its manufacturing machines needs to be serviced. Cloudian also works with the National Library of Medicine to store research articles and patents, and with the National Cancer Database to store DNA sequences of tumors: rich datasets that AI models can process to help develop new treatments or uncover new insights.
"GPUs have been an amazing enabler," Tso says. "Moore's Law doubles the amount of compute every two years, but GPUs can parallelize operations on chips, so you can network GPUs together and blow past Moore's Law. That scaling is pushing AI to new levels of intelligence, but the only way to make GPUs work hard is to feed them data at the same speed they compute, and the only way to do that is to get rid of all the layers between them and your data."