On April 10, 2020, Data-Science-Blog.com published an interview with Gregory Blepp, CEO of NetDescribe, about data analytics for monitoring and optimizing IT networks.

The interview was conducted by Benjamin Aunkofer, editor of the Data Science Blog. Benjamin Aunkofer is Lead Data Scientist at DATANOMIQ and university lecturer for Data Science and Data Strategy. He also works as Interim Head of Business Intelligence and gives workshops on BI, Data Science and Machine Learning for companies.

Data Science Blog Interview with Gregory Blepp, CEO of NetDescribe GmbH

Gregory Blepp is Managing Director of NetDescribe GmbH, based in Oberhaching in the south of Munich. He and his team of consultants, data scientists and IT network experts are involved in the technical analysis of IT networks and the automation of analysis via applications.

I  Introduction

Data Science Blog: Mr. Blepp, the name of your company, NetDescribe, actually describes what you stand for: the analysis of technical networks. Where does the need for this service arise and what solution do you have at hand?

Our customers must have near real-time visibility into the performance of their corporate IT. This includes the current status of the networks as well as other areas such as servers, applications, storage and, of course, the web infrastructure and security.

In the banking environment, for example, unrestricted WAN connections are absolutely critical for trading between international stock exchanges. For this purpose we offer StableNetⓇ from InfosimⓇ. This is a network management platform that monitors the status of the connections in real time. For the underlying network platform (router, switch, etc.) we consolidate the monitoring with GigamonⓇ.

For retail companies, the performance of the online shop platform is essential. In addition, there are high security requirements for the transmission of personal information and credit card data. We use SplunkⓇ for this purpose. This solution ideally combines general performance monitoring with a high degree of automation and offers essential support for security departments.

Data Science Blog: Are companies more concerned with the security aspects of a corporate network or does performance analysis for the purpose of optimization play a bigger role?

That depends on the current needs of the company.
For many of our customers, security aspects have been and continue to be the primary focus. In the course of our cooperation, we can show how closely the individual departments are interlinked by establishing consistent performance analysis. The higher visibility facilitates performance analyses and at the same time provides the security department with important information about the current status of the infrastructure.

Data Science Blog: Are you dealing with Big Data – literally?

When we talk about Big Data, we distinguish between

  • the organic growth of corporate data based on established processes, including the offer of new services, and
  • real Big Data, e.g. the connection of production processes to corporate IT, i.e. additional processes in companies initiated by digitalization.

Both topics are a great challenge for our customers. On the one hand, the performance of the systems must be expanded and upgraded to cope with the additional data volumes. On the other hand, this new data only has real value if it is interpreted correctly and if the results are consistently incorporated into the planning and control of the companies.

At NetDescribe, we are concerned with managing this growth and the adjustments it requires. In short: bringing order to the data chaos. We want to give IT managers, but also the entire organization, a reliable indication of how the complete infrastructure is doing. This includes correlating the data across the individual areas, also known as silos, and presenting it in context.

II  Network Optimization Technologies

Data Science Blog: Log data analysis has existed for as long as log files have. What’s stopping a BI team from opening a data lake and just getting started?

That’s absolutely right, log data analysis has always existed. It’s simply a matter of relevance. In the past, Wireshark was used to analyze a data set when necessary, to identify and track down a problem. Today, huge amounts of data (logs) are continuously recorded in the IoT environment to create analyses.

In my opinion, three major changes are the drivers for the widespread use of modern analysis tools.

  • Analyzing the contents and correlations of log files from almost all systems of the IT infrastructure, in near real time and across large amounts of data, is only possible with the new technologies. This helps in times of digitalization, where up-to-date information takes on a whole new significance and gives IT a much greater importance.
  • An important aspect of recording and storing log files today is that I no longer have to define the search criteria in advance to get answers from the data records. The new technologies allow completely free queries of information across all data (see the sketch after this list).
  • In the past, log files were an auxiliary tool for specialists. The information, presented in technical form, helped to solve a problem – if you knew exactly what you were looking for. The current solutions also come with a GUI that is not only modern, but also individually adaptable and understandable for non-technicians. Thus, the circle of users of the “Logfile Manager” is expanding from specialists in the security and infrastructure sector to departmental managers and employees, up to the management.
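
To make the schema-on-read idea from the second point concrete, here is a minimal Python sketch: raw log lines are stored as-is, and arbitrary criteria are applied only at query time. The file names and the search pattern are hypothetical placeholders; real platforms such as SplunkⓇ add indexing and a full query language on top of this principle.

```python
import json
import re

def search_logs(paths, pattern):
    """Scan raw log files for a free-text pattern without any
    pre-defined schema or index (schema-on-read)."""
    regex = re.compile(pattern, re.IGNORECASE)
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, 1):
                if regex.search(line):
                    # Parse as JSON if possible, fall back to raw text.
                    try:
                        event = json.loads(line)
                    except json.JSONDecodeError:
                        event = {"raw": line.rstrip()}
                    yield path, lineno, event

if __name__ == "__main__":
    # Hypothetical log files; the criteria are decided at query time.
    for path, lineno, event in search_logs(
            ["firewall.log", "webserver.log"], r"login failed|timeout"):
        print(f"{path}:{lineno} {event}")
```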

The data lake was and is an essential component. If we look at technologies such as Apache KafkaⓇ and, as a managed solution, Confluent for Apache KafkaⓇ, a central data hub is established. All IT departments benefit from this central data hub, and all analysts access the same database with their tools. This means that the raw data is collected only once and made available to all tools equally.
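
The following is a minimal sketch of that fan-out pattern using the Python client for Apache KafkaⓇ (confluent-kafka): one producer writes the raw data once, and each analysis tool reads the same topic through its own consumer group. The broker address, topic and group names are assumptions for illustration, not details of NetDescribe’s setup.

```python
from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"   # hypothetical broker address
TOPIC = "it-raw-events"     # hypothetical central raw-data topic

# Collect the raw data once: every source publishes to one topic.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key=b"router-01",
                 value=b'{"bytes": 51200, "dst": "10.0.0.5"}')
producer.flush()

def make_consumer(group_id):
    """Each tool gets its own consumer group, i.e. independent offsets,
    so security, monitoring and BI all read identical raw data."""
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": group_id,
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    return consumer

security_feed = make_consumer("security-analytics")
monitoring_feed = make_consumer("performance-monitoring")

msg = security_feed.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
```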

Data Science Blog: This makes you a company that combines data analysis, visualization and monitoring, but also IT security. What is actually particularly important to companies in this respect?

Security is of course at the top of the list. Organizations are naturally very sensitive here. Current media reports on topics such as cyber attacks, hacking, etc. have a great impact and trigger action. In addition, there are compliance requirements which, depending on the industry, are implemented faster and more uncompromisingly.

NetDescribe specializes in looking at this topic with more foresight.

Of course, the threat of attacks on the infrastructure from outside is considerable, and IT security must provide the best possible protection. Firewalls, classic virus protection, etc., and technologies such as ExtraHop, which contribute to the protection of companies through consistent monitoring and updating of signatures, serve this purpose.

However, the integration of the underlying structures, like the network, is just as important. An attack on an organization, no matter where it is initiated from, is always transported via a router that forwards the data. This is true whether it comes from a cloud or a traditional environment, and whether it is virtual or not. This is where NetDescribe comes in, using established technologies such as ‘flow’ with specially developed software modules. These so-called NetDescribe Apps forward these data sets to SplunkⓇ or StableNetⓇ. This results in a considerably extended analysis of threat scenarios, combined with the possibility to establish company-wide optimization.
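
As an illustration of the general pattern (not NetDescribe’s actual apps), a flow record could be forwarded to SplunkⓇ via its HTTP Event Collector roughly like this. The endpoint URL, token and sourcetype are placeholder assumptions, and the flow fields are simplified.

```python
import json
import requests

# Hypothetical Splunk HTTP Event Collector settings.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def forward_flow(record):
    """Send one flow record to Splunk via the HTTP Event Collector."""
    payload = {
        "sourcetype": "netflow",  # assumed sourcetype name
        "event": record,
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=5,
        verify=False,  # only for a lab setup with self-signed certificates
    )
    resp.raise_for_status()

# Example flow record as a router might export it (fields simplified).
forward_flow({
    "src_ip": "10.1.2.3", "dst_ip": "203.0.113.7",
    "src_port": 51432, "dst_port": 443,
    "protocol": "tcp", "bytes": 88211, "packets": 64,
})
```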

Data Science Blog: So you not only analyze ad hoc, but also develop solutions as applications (apps).

That’s right. Each of the technologies we use has its own main focus and, in our opinion, is leading in its field: InfosimⓇ in the network, especially in connections; VIAVI in packet analysis and flows; SplunkⓇ in security; and Confluent for Apache KafkaⓇ as the central data hub. So every solution has its own right to exist in organizations. For over a year, NetDescribe has made it its business to combine these technologies into a “stack”.

Gigaflow from VIAVI is probably the most scalable software solution for storing and analyzing network data in large quantities, quickly and without loss. SplunkⓇ has meanwhile become a standard tool for data analysis and for providing visualization to a large audience.

NetDescribe has now introduced an app that delivers the correlated NetFlow data from VIAVI Gigaflow to SplunkⓇ. Queries on specific data sets can also be sent directly from SplunkⓇ to the VIAVI Gigaflow solution. The result is a significantly enhanced SplunkⓇ platform: the entire network at the touch of a button. In addition, this connection saves SplunkⓇ resources.

Furthermore, there is now a NetDescribe StableNetⓇ app. Even more connections are in the planning stage.

The goal here is quite pragmatic: if SplunkⓇ establishes itself in companies as the platform for security analyses and for the data framework in general, then we as NetDescribe support this. That means that we connect the other business-critical solutions of the departments to this platform or guarantee data integration. This is what our customers expect.

Data Science Blog: Which technologies do you rely on – in terms of software?

As just mentioned, SplunkⓇ is a platform that has established itself in most companies. We have been working with SplunkⓇ for over 10 years now and are establishing the solution with our customers.

SplunkⓇ has the big advantage that our customers can start with a dedicated and manageable application, while the technology itself scales almost without limit. This applies to security as well as to infrastructure, application monitoring and development environments. The constantly growing requirements of our customers quickly lead to further discussions about developing enhanced application scenarios.

In addition to SplunkⓇ, we rely on StableNetⓇ from InfosimⓇ for network management, and have done so for over 10 years now. Here too, the manufacturer’s experience in the provider environment allows us to establish a highly scalable solution for our customers.

Confluent for Apache KafkaⓇ is a comparatively recent solution, but it is currently receiving a great deal of attention in companies. The establishment of a central data hub for analyses, evaluations, etc., on which all performance data is made available centrally, will make it easier for administrators, but also for planners and analysts, to provide meaningful data in the future. The combination of open source and managed solution exactly meets the customers’ objectives and apparently also the spirit of the times, comparable to the Linux distributions from Red Hat and SUSE.

I already mentioned VIAVI Gigaflow for network analysis. In the coming weeks, the new version of the VIAVI Apex software will establish scoring for networks. Imagine the MOS score of VoIP, applied to enterprise networks. That’s what it’s about.
With this software, even less specialized administrators get the possibility to make concrete statements about the condition of the network infrastructure and occurring problems with only three mouse clicks. Is it the network, the application or the server that causes the problem? This substantially curbs the current ping-pong between departments, from which we often only get the statement “we’re fine”.
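
To give an idea of what such a score can look like, here is a simplified MOS estimate in the spirit of the ITU-T G.107 E-model, computed from latency, jitter and packet loss. The coefficients follow a common textbook heuristic and are assumptions for illustration, not VIAVI’s actual scoring algorithm.

```python
def estimate_mos(latency_ms, jitter_ms, loss_pct):
    """Rough MOS estimate from network metrics, using a simplified
    E-model-style heuristic (inspired by ITU-T G.107)."""
    # Jitter hurts roughly like extra latency; add a small codec delay.
    effective_latency = latency_ms + 2 * jitter_ms + 10.0

    # Delay impairment: mild below 160 ms, steep above.
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0

    # Packet loss impairment, then clamp the R-factor to [0, 100].
    r -= 2.5 * loss_pct
    r = max(0.0, min(100.0, r))

    # Standard conversion from R-factor to the 1..4.5 MOS scale.
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

print(estimate_mos(latency_ms=40, jitter_ms=5, loss_pct=0.5))    # healthy link
print(estimate_mos(latency_ms=250, jitter_ms=30, loss_pct=3.0))  # degraded link
```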

Our software portfolio is rounded off by the SentinelOne solution for endpoint protection.

III  AI and the Human Factor for Network Analysis

Data Science Blog: To what extent does artificial intelligence (AI) or machine learning play a role?

Machine learning already plays a very important role today. By consistently feeding in raw data and using specific algorithms, better analyses of the history and of complex relationships can be produced over time. In addition, the accuracy of forecasts for the future can be improved immensely.
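
A minimal illustration of learning a baseline from history: the sketch below flags values that deviate strongly from a rolling window of recent samples. Real products use far more sophisticated models; the window size, threshold and synthetic data here are arbitrary assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag samples that deviate strongly from the recent history,
    a minimal stand-in for learned baselines on performance data."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Synthetic latency series: stable around 20 ms with one spike.
series = [20.0 + 0.5 * (i % 3) for i in range(50)]
series[42] = 95.0
print(detect_anomalies(series))  # -> [(42, 95.0)]
```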

A concrete example is the aforementioned Endpoint Protection of SentinelOne. By using AI to monitor and control access to any IoT device, SentinelOne enables machines to solve problems that could not previously be solved on a large scale.

This is also where our holistic approach comes into play: we look not just at individual areas of IT, but at company-wide IT.

Data Science Blog: What kind of people do you work with in your team? Are they rather the introverted nerds and hackers or extroverted consultants? What makes you stand out as a team?

I would definitely not call our technical consulting staff nerds and hackers.

Our consulting team currently consists of nine people. Each of them is a proven expert for specific products. Of course, we also have introverted colleagues who prefer to analyze a problem in seclusion before generating a solution. However, the majority of our technical colleagues are always in close consultation with the customer.

When working for the customer, it is very important that you are not only ahead of the field in terms of technical skills. You must also have strong communication skills and be extremely team-oriented. Quick adaptation to the different working environments and “colleagues” at the customer’s site is what distinguishes our people.

As a constantly available communication tool, we use an internal chat which is available to everyone at all times. Our consulting team is always in contact with their colleagues, even at the customer’s premises. This has the great advantage that the entire know-how is available “in the pool”.

In addition to the consultants, there is also our sales team, currently four employees. These colleagues are, of course, always full of energy, as is typical for sales.

Dedicated PreSales Consultants are the technical spearhead for capturing and understanding customer requirements. Close cooperation with the actual consulting team is the precondition for the forward-looking planning of all projects.

By the way, we are always looking for qualified colleagues (m/f/d). Your readers will find details of our job offers on our website under the menu item “Career”. We are pleased to hear from every interested person.

© Copyright – Data-Science-Blog.com

About NetDescribe GmbH

NetDescribe GmbH is headquartered in Oberhaching, in the south of Munich. Trusted Performance by NetDescribe stands for fail-safe business processes and cloud applications. The strength of NetDescribe lies in tailor-made technology stacks instead of off-the-shelf technology. The holistic portfolio offers data analysis, solution concepts, development, implementation and support. As a trusted advisor to corporations and public institutions, NetDescribe delivers highly scalable solutions with state-of-the-art technologies for dynamic, transparent real-time monitoring. This provides customers with insights into security, cloud, IoT and Industry 4.0 at all times, so they can make agile decisions, secure internal and external compliance and conduct efficient risk management.

Trusted Performance by NetDescribe.