The amazing applications of graph neural networks


The predictive prowess of machine learning is widely hailed as the pinnacle of statistical artificial intelligence. Lauded for its ability to improve everything from customer service to operations, its neural networks, model varieties, and deep learning deployments are seen as a surefire way for businesses to leverage data.

But according to Jans Aasman, CEO of Franz, there is just one small problem with this otherwise accurate esteem: for the most part, machine learning “only works for what they call Euclidean datasets, where you can just look at the situation, extract a number of salient data points from that, turn it into a number in a vector, and then you have supervised learning and unsupervised learning and all that.”

Certainly, a generous share of corporate data is Euclidean and easily vectorized. However, there is a plethora of non-Euclidean, multidimensional data that serves as a catalyst for some amazing machine learning use cases, such as:

  • Network prediction: Analyzing the varying relationships between entities or events in complex social networks of friends and foes yields incredibly accurate predictions of how any event (such as a specific customer purchasing a certain product) will influence members of the network. This intelligence can reshape everything from marketing and sales approaches to regulatory mandates (know your customer, anti-money laundering, etc.), healthcare, law enforcement, and more.
  • Entity classification: The ability to classify entities based on events, such as a part or system failure in connected vehicles, is essential for predictive maintenance. This capability has obvious implications for fleet management, equipment monitoring, and other Internet of Things applications.
  • Computer vision and natural language processing: Understanding the multidimensionality of the relationships between words, or between objects in a scene, transforms typical neural network deployments for NLP and computer vision. The latter supports scene generation: instead of a machine watching a scene of a car driving past a fire hydrant with a sleeping dog nearby, those objects can be described so that the machine generates that image.

Each of these use cases revolves around high-dimensional data with multifaceted relationships between entities or nodes, at a scale at which “regular machine learning fails,” noted Aasman. However, they are ideal for graph neural networks, which specialize in these and other high-dimensional data deployments.
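At their core, graph neural networks compute a node's representation from the representations of its neighbors, a step known as message passing. The sketch below is a minimal, pure-Python illustration of one such round; the toy graph, the feature values, and the mean-of-neighbors aggregator are illustrative assumptions, not a production GNN layer.

```python
# One round of neural message passing over a toy graph.
# Adjacency list: node -> list of neighbors (hypothetical data).
graph = {
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A"],
}

# Each node carries a small feature vector (hypothetical values).
features = {
    "A": [1.0, 0.0],
    "B": [0.0, 1.0],
    "C": [2.0, 2.0],
}

def message_pass(graph, features):
    """Update each node by averaging its own features with the mean
    of its neighbors' features (a simplified aggregate-and-update step)."""
    updated = {}
    for node, nbrs in graph.items():
        dim = len(features[node])
        # Aggregate: element-wise mean of neighbor features.
        agg = [sum(features[n][i] for n in nbrs) / len(nbrs) for i in range(dim)]
        # Update: blend the node's own features with the aggregated message.
        updated[node] = [0.5 * (features[node][i] + agg[i]) for i in range(dim)]
    return updated

print(message_pass(graph, features))
```

A real graph neural network stacks several such rounds, with learned weights and nonlinearities, so information propagates across multiple hops of the network.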

High-dimensional data

Graph neural networks achieve these feats because graph approaches focus on discerning relationships in data. Relationships in Euclidean datasets are not as complicated as those in high-dimensional data, because “anything that is on a straight line or two-dimensional planar surface can be turned into a vector,” observed Aasman. Those numbers, or vectors, form the basis for generating features for typical machine learning use cases.

Examples of non-Euclidean datasets include the many relationships of more than 100 aircraft systems to each other, the links between one group of customers and four others, and the myriad interdependencies among the links between those additional groups. This information is not easily vectorized and escapes the capacity of machine learning without graph neural networks. “Each number in the vector would actually depend on other parts of the graph, so it’s too complicated,” Aasman commented. “Once things get into sparse graphs and you have networks of things, networks of drugs, genes, and drug molecules, it becomes really difficult to predict whether a particular drug is missing a link with something else.”

Relationship predictions

When the context between nodes, entities, or events really matters (as in the pharmaceutical use case Aasman referenced, or any other complex network application), graph neural networks deliver predictive accuracy by understanding the relationships between the data. This quality manifests in three main ways:

  • Link prediction: Graph neural networks are adept at predicting links between nodes, readily revealing whether entities are related, how, and what effect that relationship will have on business objectives. This idea is key to answering questions such as “do certain events happen more often for a patient, for an airplane, or in a text document, and can I actually predict the next event,” Aasman revealed.
  • Entity classification: It’s easy to classify entities based on attributes. Graph neural networks do so while also considering the links between entities, resulting in new classifications that are difficult to achieve without graphs. This application involves supervised learning; predicting relationships involves unsupervised learning.
  • Graph clustering: This capability indicates how many subgraphs a given graph contains and how they relate to one another. This topological information is based on unsupervised learning.
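The first of these tasks can be made concrete with a toy example. The snippet below uses a hand-made friendship graph (hypothetical data) and scores an unlinked pair of nodes by how many neighbors they share; this common-neighbors heuristic is one of the simplest stand-ins for the link predictions a trained graph neural network learns to make.

```python
# Toy friendship graph: node -> set of direct connections (hypothetical data).
graph = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "cat", "dan"},
    "cat": {"ann", "bob", "dan"},
    "dan": {"bob", "cat"},
}

def common_neighbors(g, u, v):
    """Score a candidate edge (u, v) by the number of neighbors u and v share.
    A higher score suggests a link is more likely to exist or form."""
    return len(g[u] & g[v])

# ann and dan are not directly linked, but share two friends,
# so a link between them is a plausible prediction.
print(common_neighbors(graph, "ann", "dan"))  # 2 (bob and cat)
```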

Combining these capabilities with data models containing temporal information (including the time of events, such as when customers made purchases) yields compelling machine learning applications. This approach can illustrate a patient’s medical future based on their past and all the relevant events that compose it. “You can say, given this patient, give me the next disease and the chance of getting that disease, in descending order of likelihood,” Aasman noted. Organizations can do the same for customer churn, loan defaults, certain types of fraud, and other use cases.

Topological text classification and image understanding

Graph neural networks produce transformational results when their unprecedented relational discernment is applied to NLP and computer vision. For the former, they support the topological classification of texts, which is fundamental to a faster and more granular understanding of written language. Conventional entity extraction can identify key terms in text. “But in a sentence, things can refer back to a previous word, or forward to a later word,” Aasman explained. “Entity extraction doesn’t look at that at all, but a graph neural network will look at the structure of the sentence, so you can do a lot more in terms of comprehension.”

This approach also underpins image understanding, in which graph neural networks grasp the relationships between different objects in the same image. Without them, machine learning can simply identify various objects in a scene. With them, it can glean how those objects interact or relate to each other. “[Non-graph neural network] machine learning doesn’t do that,” Aasman said. “Not how all the things in the scene fit together.” Coupling graph neural networks with conventional neural networks can richly describe the images in scenes and, conversely, generate detailed scenes from descriptions.

Graph approaches

Graph neural networks build on the neural networks originally devised in the 20th century. However, graph approaches allow them to overcome the limits of vectorization and operate on high-dimensional, non-Euclidean datasets. Specific graph techniques (and techniques suited to graphs) aiding in this effort include:

  • Jaccard index: When trying to establish whether there should be a missing link between one set of nodes and another, for example, the Jaccard index can inform that decision by revealing “how similar two nodes are in a graph,” Aasman said.
  • Preferential attachment: This statistical concept is a “technique that they call winner-takes-all, where you can predict whether someone is going to get everything or nothing,” Aasman said. Preferential attachment measures the proximity of nodes.
  • Centrality: Centrality is an indicator of the importance of nodes in a network, which relates to how often a node lies between other nodes.
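These three measures have simple textbook definitions that can be sketched on a toy graph. The snippet below is an illustrative, pure-Python sketch (hypothetical data), using the standard formulations: Jaccard index as shared neighbors over combined neighbors, preferential attachment as the product of the two nodes' degrees, and, as the simplest variant of centrality, degree centrality.

```python
# Toy undirected graph: node -> set of neighbors (hypothetical data).
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def jaccard(g, u, v):
    """Jaccard index: |N(u) & N(v)| / |N(u) | N(v)| -- how similar
    two nodes' neighborhoods are, on a 0..1 scale."""
    return len(g[u] & g[v]) / len(g[u] | g[v])

def preferential_attachment(g, u, v):
    """Preferential attachment score: deg(u) * deg(v) -- well-connected
    node pairs are the most likely to gain new links."""
    return len(g[u]) * len(g[v])

def degree_centrality(g, node):
    """Degree centrality: the fraction of all other nodes this node touches."""
    return len(g[node]) / (len(g) - 1)

print(jaccard(graph, "b", "d"))                  # b and d share neighbors a and c
print(preferential_attachment(graph, "b", "d"))
print(degree_centrality(graph, "a"))
```

Graph libraries ship these heuristics ready-made (for example, NetworkX's `jaccard_coefficient` and `preferential_attachment` link-prediction functions), but the definitions above are all that is happening underneath.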

These and other graph approaches allow graph neural networks to work with high-dimensional data without vectorizing it, expanding the overall utility of enterprise machine learning applications.

Multidimensional machine learning at scale

The critical distinction between applying graph neural networks to the foregoing use cases and applying typical machine learning approaches is the complexity of the relationships analyzed and the scale of that complexity. Aasman described a use case in which graph neural networks made accurate predictions about the actions of world leaders based on inputs spanning most of a year, over 20,000 entities, and nearly half a million events. Such prediction is far from academic when applied to customer behavior, healthcare, or other critical deployments. It may therefore impact cognitive computing deployments sooner than organizations realize.

About the Author

Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.




