Here’s what’s happening at The Global Developer Virtual Bootcamp 2022. Posted on: Dec 18, 2021

GLOBAL DEVELOPER VIRTUAL BOOTCAMP IS ON JANUARY 01 - MARCH 31, 2022.

And we’re off to the technology races! Yes, the Global Developer Virtual Bootcamp dives deep into the DevOps solutions that power AI, full-stack development, databases (SQL & NoSQL), multi-cloud, big data, and programming languages. It’s shaping up to be a productive and opportunity-filled day. Don’t have a pass yet? No problem. Simply register here and then get ready to attend the Bootcamp.

Here’s a quick look at what you, and many other attendees, can experience today. Check out the event Overview and Agenda for full descriptions, and then get ready to learn, connect, and network!

5000+ speakers have addressed our past events all over the globe. Fox News covered our New York event.

Previous event speaker list: http://www.globalbigdataconference.com/speakers.html

First up: Agenda highlights

Workshop: The mathematics of Deep Learning (Ganapathi Pulipaka, Chief AI Scientist, Deepsingularity LLC)

Kubeflow Workshop (Alex Aidun, Director, Arrikto and Josh Bottum, Kubeflow Community Product Manager, Arrikto)

Workshop: Deep Learning with Graphs (Sujit Pal, Technology Research Director, Elsevier)

Now serving up the presentations:

The mathematics of Deep Learning: Join Ganapathi Pulipaka, Chief AI Scientist at Deepsingularity LLC.
The past decade has shown high-precision, best-performing AI systems running on smartphones for speech recognition, with neural translation architectures implementing deep learning techniques. Deep neural networks are an advanced technique: they contain thousands of processing nodes with millions of neurons and are feed-forward, meaning data traverses from neuron to neuron in one direction. Inspired by the human brain, each node receives a number of connections, and each connection is assigned a weight by the node. During the active stage of the network, every time a node receives a data item it multiplies it by the associated weight and adds the resulting products together, producing a single number. Every connection is defined with a threshold value: if the produced number is below the threshold, the node does not pass data to the next node. Structurally, a neural network contains an input layer, one or more hidden layers, and an output layer. A network with a single hidden layer is constrained in its capacity for spatial expression, so increasing the number of hidden layers increases the efficiency of the overall network; the design still has to be architected so that it remains efficient. This session provides in-depth coverage of neural network architectures, linear algebra, calculus, and backpropagation algorithm implementations in TensorFlow and Python.
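To make that weighted-sum-and-threshold behavior concrete, here is a minimal sketch of a single feed-forward node in plain Python with NumPy. The inputs, weights, and threshold are illustrative values chosen for this example, not taken from the session materials.

import numpy as np

# One feed-forward node as described above: multiply each incoming value
# by its connection weight, sum the products, and pass the result along
# only if it clears the threshold.
def node_output(inputs, weights, threshold):
    total = np.dot(inputs, weights)               # weighted sum of incoming data
    return total if total >= threshold else 0.0   # below threshold: nothing passes on

inputs = np.array([0.5, 0.9, 0.2])    # data items arriving on three connections
weights = np.array([0.4, 0.3, 0.8])   # weight assigned to each connection
print(node_output(inputs, weights, threshold=0.5))  # 0.63, above threshold, so it fires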

Kubeflow Workshop: Join Alexander Aidun, Director at Arrikto, and Josh Bottum, Kubeflow Community Product Manager at Arrikto.
The Arrikto Learn Kubeflow Workshop is geared toward MLOps engineers and data scientists who are responsible for rapidly delivering an efficient, enterprise-grade ML platform using Kubeflow. The session will last approximately 4 hours and will feature a combination of slide presentations, on-screen demonstrations, and hands-on labs facilitated by Kubeflow community and technical training professionals from Arrikto. Arrikto is a core code contributor to Kubeflow and leads two of the six Kubeflow Working Groups.
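For a flavor of what building on Kubeflow looks like, here is a minimal, hypothetical sketch of a two-step pipeline written with the Kubeflow Pipelines (kfp) SDK. The component names and logic are illustrative assumptions for this article, not taken from the workshop labs.

from kfp import dsl

@dsl.component
def preprocess(rows: int) -> int:
    # hypothetical stand-in for a real data-preparation step
    return rows * 2

@dsl.component
def train(rows: int) -> str:
    # hypothetical stand-in for a real model-training step
    return f"trained on {rows} rows"

@dsl.pipeline(name="demo-pipeline")
def demo_pipeline(rows: int = 100):
    prep = preprocess(rows=rows)   # each component runs as its own pipeline step
    train(rows=prep.output)        # wire the preprocess output into training

Compiling such a pipeline (for example, with kfp.compiler.Compiler().compile) produces a specification that Kubeflow can schedule and run on Kubernetes.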

Deep Learning with Graphs: Join Sujit Pal, Technology Research Director at Elsevier.
A graph is a data structure composed of nodes interconnected by edges. Much real-world data can be represented by graphs, in application domains as diverse as social networks and biochemistry. Graph Neural Networks (GNN) are a relatively new type of deep learning architecture that has evolved to work effectively with these data structures. Traditional network architectures such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are designed around the ideas of leveraging spatial and temporal locality respectively, and are thus optimized for 2-d and 3-d images and for sequential data such as text, audio, and time series, which exhibit these properties. GNNs, on the other hand, are designed to work with the typical characteristics of graph structure, such as complex topology and indeterminate size. GNNs are flexible enough to solve different classes of graph tasks: node-level tasks such as node classification, edge-level tasks such as link prediction and recommendation, and graph- or subgraph-level tasks such as finding graph isomorphism. GNNs thus provide an efficient and scalable way to do deep learning against graph-structured data and solve novel problems. In this tutorial, we will introduce GNN concepts and popular GNN architectures such as the Graph Convolution Network (GCN), GraphSAGE, and the Graph Attention Network (GAT), and describe how they can be used to solve different types of graph tasks. We will demonstrate examples of different types of GNN using PyTorch and PyTorch Geometric. PyTorch is a popular library for deep learning in Python, and PyTorch Geometric is a library for doing deep learning specifically on irregular data structures such as graphs.
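As a taste of the tutorial's toolkit, here is a minimal sketch of a two-layer GCN for node-level classification using PyTorch and PyTorch Geometric. The toy graph, feature sizes, and class count are illustrative assumptions, not taken from the tutorial.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, num_features, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))           # aggregate features from each node's neighbors
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)                # per-node class scores

# Toy graph: 4 nodes, 3 undirected edges (each edge listed in both directions).
x = torch.randn(4, 8)                                   # 8 random features per node
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
model = GCN(num_features=8, hidden=16, num_classes=3)
logits = model(x, edge_index)                           # shape [4, 3]: one score vector per node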

KDnuggets (press) covered our past events:

http://www.kdnuggets.com/2015/04/big-data-developer-conference-highlights-day1.html

http://www.kdnuggets.com/2015/04/big-data-developer-conference-highlights-day2.html

http://www.kdnuggets.com/2015/04/big-data-developer-conference-highlights-day3.html

 

Finally, don’t miss exploring the companies hosting hands-on workshops and showing off their tech and talent in the virtual Bootcamp: Amazon Web Services, Google, Deepsingularity LLC, Cloudera, Arrikto, Elsevier, and more.

Global Developer Virtual Bootcamp is happening JAN 01 - MAR 31, 2022, and you still have time to join this world-class DevOps deep-dive. Simply register here; group registrations get a big discount. Promo code: LIN