Review: DB Networks Enhances Database Security with Machine Learning

Protecting databases takes more than securing the perimeter; it also takes a deep understanding of how users and applications interact with databases, as well as knowing which databases are actually alive and breathing on the network. DB Networks aims to provide the intelligence, analytics, and tools to bring insight into the database equation.

It’s no secret that database intrusions are on the rise, much to the chagrin of those responsible for infosec. While many have focused on protecting the edge of the network and wrapping additional security around user access, the simple fact is that databases are the primary storehouses of private and sensitive information, and they are often the true targets of intruders.

Recent events, such as the Target breach, the theft of security clearance information from the US Office of Personnel Management (OPM), and the theft of medical records from health insurer Anthem, illustrate that protecting sensitive data is quickly becoming a losing battle. DB Networks is taking steps to turn the tide and bring victory to those charged with protecting databases.

The San Diego-based company offers its DBN-6300 appliance and its virtual cousin, the DBN-6300v, as sources of database activity monitoring, analytics, and discovery, giving today’s security professionals an edge against the ever-growing wave of cyberattacks targeting databases. Those products promise to equip security professionals and database administrators with tools that can identify and mitigate breaches before irreparable damage is done.

Case in point is the ubiquitous SQL injection attack, which is far more common than most will admit. SQL injection attacks have been around for more than ten years, and security professionals should be more than capable of protecting against them. Yet, according to Neira Jones, the former head of payment security for Barclaycard, some 97 percent of data breaches worldwide still involve an SQL injection somewhere along the line.
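The mechanics are easy to demonstrate. The sketch below is illustrative only, using Python’s built-in sqlite3 module rather than anything from the product under review: it shows the classic tautology attack against a query built by string concatenation, and how a parameterized query neutralizes the same input.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # returns every row -- the classic injection
print(lookup_safe(payload))    # returns [] -- no user has that literal name
```

The injected quote turns the unsafe query into `... WHERE name = '' OR '1'='1'`, which matches every row; the parameterized version treats the whole payload as a harmless string.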

Taking a Closer Look at the DB Networks IDS-6300:

I recently had a chance to put the DB Networks IDS-6300 through its paces at the company’s San Diego offices. The IDS-6300 is a physical appliance, built on Intel hardware as a 2U rack-mountable server. The device features four 10/100/1000 Ethernet ports for data capture, one 10/100/1000 Ethernet admin port, and one 10/100/1000 Ethernet customer service port, as well as a 480GB SSD and 2TB of archival storage.

The device can be deployed by plugging it into either a span port or a tap port on the core switch in front of the database servers. The idea is to place the device logically ahead of the database servers, yet behind the application servers, so it can focus on SQL traffic. The IDS-6300 is managed via a browser-based interface that supports the Chrome, Firefox, and Safari browsers, with full IE support planned for the near future.

I tested the device in a mock operational environment that included MS-SQL databases and a demo version of a banking application with some known vulnerabilities. Setting up the device entailed little more than defining the capture ports and completing some very basic post-installation items. Once the device was configured to capture data, the next step was to identify databases.

Here, the IDS-6300 does an admirable job: it automatically discovers any database that generates traffic, even something as simple as a basic SQL statement. The device monitors traffic around the clock and continually checks for database activity.

That proves to be a critical element in the quest to secure databases; according to company representatives, many customers have discovered databases that IT was unaware were operating in production environments. What’s more, the database discovery capability can be used to identify rogue databases, or databases that were never shut down after a project completed.
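As a rough illustration of how passive discovery can work, the toy sketch below classifies captured traffic flows by well-known default database ports. A real appliance like the IDS-6300 inspects protocol payloads rather than just ports; the port mapping and flow tuples here are assumptions made for the example.

```python
# Well-known default ports for common database engines (assumed mapping).
DB_PORTS = {1433: "MS-SQL", 3306: "MySQL", 5432: "PostgreSQL", 1521: "Oracle"}

def discover(flows):
    """Map each server that receives traffic on a database port to its engine."""
    found = {}
    for src, dst_ip, dst_port in flows:
        if dst_port in DB_PORTS:
            found.setdefault((dst_ip, dst_port), DB_PORTS[dst_port])
    return found

# Flows captured off the span port: (client, server, port) -- hypothetical.
flows = [
    ("10.0.0.5", "10.0.1.20", 1433),   # known production MS-SQL server
    ("10.0.0.9", "10.0.1.99", 3306),   # surprise: an undocumented MySQL box
    ("10.0.0.5", "10.0.1.20", 443),    # HTTPS, ignored
]
print(discover(flows))
```

Even this crude approach surfaces the second server, the kind of forgotten database that only shows up once someone watches the wire.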

The database discovery information offers administrators real insight into exactly what is operating on the network and what is vulnerable to attack; knowing that can be the first step in mitigating security problems, before even venturing into traffic analysis and detection.

Nevertheless, the product’s real power comes into play when detecting SQL injection attacks. Instead of using canned templates or signatures, the IDS-6300 takes SQL attack detection to the next level: the device learns what normal traffic looks like, records and analyzes what that traffic accomplishes, and then builds a behavioral model.

Simply put, the device learns how an application communicates with a database, and that information is used to create a behavioral model. Once learning is complete, the device uses multiple detection techniques to validate future SQL statements against expected behavior. In practice, behavioral analysis proves resistant to zero-day attacks, newly scripted attacks, and even old, recycled attacks, because all of those attacks fall outside the norms of expected behavior.
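DB Networks has not published its detection internals, but the learn-then-validate idea can be sketched in miniature: mask the literals out of each SQL statement to get its structural template, learn the templates the application normally emits, and flag anything that deviates. The class and regexes below are illustrative assumptions, not the appliance’s actual algorithm.

```python
import re

def normalize(sql):
    """Reduce a statement to its structural template by masking literals."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> ?
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> ?
    return re.sub(r"\s+", " ", sql).strip().upper()

class BehavioralModel:
    def __init__(self):
        self.known = set()

    def learn(self, sql):
        self.known.add(normalize(sql))

    def is_expected(self, sql):
        return normalize(sql) in self.known

model = BehavioralModel()
# Training phase: observe how the application normally talks to the database.
model.learn("SELECT balance FROM accounts WHERE id = 42")
model.learn("SELECT balance FROM accounts WHERE id = 7")

# Both training statements share one template, so new literal values pass...
print(model.is_expected("SELECT balance FROM accounts WHERE id = 99"))
# ...while an injected tautology changes the structure and is flagged.
print(model.is_expected("SELECT balance FROM accounts WHERE id = 42 OR 1=1"))
```

Because the model keys on statement structure rather than known-bad patterns, a never-before-seen injection is flagged just as readily as a recycled one.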

That behavioral analysis eliminates the need for signatures, blacklists, whitelists, and other technologies that rely on pattern matching or static detection, which in turn reduces operational overhead and maintenance chores, turning SQL injection monitoring into something close to a plug-and-play proposition.

When an SQL injection attack occurs, the IDS-6300 captures all of the traffic and transaction information around that attack. What’s more, the device categorizes, analyzes, and presents the critical information about the attack so that administrators (or application engineers) can quickly modify database code or add firewall rules to remediate the problem.

That brings up another interesting point: the IDS-6300 proves to be a good candidate for helping organizations improve application code. With many businesses outsourcing development or modifying off-the-shelf and open-source software, situations arise where due diligence is not fully applied, and agile development projects may introduce security flaws into application code. That is not an uncommon problem, at least according to Forrester Research’s Manatosh Das: poor application coding persists despite lessons learned. Das claims that more than two-thirds of applications have cross-site scripting vulnerabilities, nearly half fail to validate input strings thoroughly, and nearly one-third can fall foul of SQL injection. Das adds that security professionals and software engineers have known about these types of flaws for years, yet they continue to show up in new software code.

The IDS-6300 can quickly detect those newly introduced flaws, prevent poor programming practices from creating vulnerabilities, and then provide the information needed to fix them.

The IDS-6300 offers customers another advantage: it can help consolidate databases by identifying which databases are active and what they are used for. That in turn can lead to companies combining databases and significantly reducing licensing and support costs. DB Networks reports that one of its customers was able to reduce database licensing costs by over $1,000,000 by consolidating databases discovered by the IDS-6300.

The IDS-6300 starts at $25,000 and is available directly from DB Networks and authorized partners. For more information, please visit



Article source:

Performance Management Brings New Found Value to IT

IT departments constantly struggle to garner the praise they deserve; most organizations look upon IT as a necessary evil, one that is both expensive and somewhat obstructionist. Nothing could be further from the truth, however, and IT departments the world over have pursued approaches that highlight the value of the services they offer, while also demonstrating the importance a properly executed IT management plan brings to the bottom line.

At last week’s Riverbed Disrupt event, GigaOM had a chance to talk with CIOs, as well as network managers, who have demonstrated the value of IT with application performance management platforms and services.

John Green, Chief Information Officer at Baker Donelson, the 64th-largest law firm in the country, offered some real-world examples of how Application Performance Management (APM) and end-user monitoring bring demonstrable value to an organization’s IT department.

Green said, “My staff supports some 275 different applications and more than 40 video conferencing rooms, which are in near constant operation.” Simply put, Green has come to know how much reliable service and an acceptable end-user experience shape the view that the firm’s 1,500 employees have of the IT department.

Green said, “I was deploying the best technology money could buy, but my end-users still weren’t happy.” Green was looking at a situation where unhappy end users could create dire circumstances that could impact the firm’s bottom line. Green added, “I could go to management meetings and offer proof that the networks were up 99.9% of the time, and that the databases and the email servers were delivering five-nines uptime. Yet, my end users were still complaining.”
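Part of the disconnect is that impressive-sounding availability figures still permit real downtime. A quick back-of-the-envelope calculation shows what the figures Green cited actually allow per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget(availability):
    """Minutes of allowed downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_budget(0.999), 1))    # "three nines": ~525.6 min/yr
print(round(downtime_budget(0.99999), 2))  # "five nines": ~5.26 min/yr
```

Even at three nines, a network can be dark for the better part of nine hours a year, and none of those numbers say anything about how sluggish the experience feels while the lights are technically on.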

That is when Green had an epiphany, one that amounted to realizing that network performance statistics and end-user expectations rarely go hand in hand. Green said, “We needed the ability to track the actual end-user experience, and then use that information to meet user expectations.”

Green found those much-desired capabilities in SteelCentral Aternity, a product that can monitor any application on any device to provide the actual user perspective, at least when it comes to responsiveness and performance. Green said, “I have been an Aternity user for about seven years, and it completely transformed the way we relate to our end users.”

Nonetheless, Green said, “Aternity is only one part of the puzzle; although it provides valuable information, I would like to see the whole performance and experience picture on one pane of glass.”

That need brought Green to the Riverbed Disrupt event. Riverbed recently purchased Aternity and is integrating the technology into its SteelCentral product line, looking to give its customers that single-pane-of-glass view. Green was impressed with the direction Riverbed is taking with end-to-end monitoring and offered, “With the Riverbed and Aternity combination, there is now a mix of tools that, when combined into a single pane of glass, gives you total visibility across your network, from the servers to the circuits.”

While the Riverbed event was about new technologies, the real message was that full monitoring capabilities let IT staffers better serve end-users and demonstrate the value of effective IT.





Riverbed Demonstrates the Importance of Full Stack Monitoring

Complete end-to-end monitoring has become increasingly important as enterprises strive to move from legacy data centers to the promise of software-defined environments. After all, network managers encumbered by missing pieces of the network connectivity puzzle are likely to fail in the transition to software-defined solutions, an observation made abundantly clear at Riverbed’s Disrupt event held in Manhattan last week. Overcoming the obstacles of connectivity has become Riverbed’s clarion call, and the company now offers comprehensive solutions that not only ease the transition to software-defined networking, but also bring much more control and information to the network management realm.

Case in point is the company’s move to products that embrace Software-Defined Wide Area Networking (SD-WAN), such as SteelConnect 2.0, an application-defined SD-WAN solution. In an interview with GigaOM, Joshua Dobies, vice president of product marketing at Riverbed, said “the new capabilities offered allow branch offices to directly access the cloud, all without having to backhaul everything back to the data center.” Dobies added, “SD-WAN paves the way for complete digital transformation, allowing enterprises to quickly access the benefits of the cloud, while not discarding their existing investments in Data Center Technologies.”

Of course, the wholesale movement to the cloud means that technologies must transition to platforms that enable transformation without incurring disruption, a situation that proves to be the sweet spot for end-to-end monitoring. With full network visibility, along with end-user experience monitoring, network managers can identify connectivity and performance problems on the fly, and can quickly address those problems with policies and tuning.

With SteelConnect 2.0, the next version of its SD-WAN offering, the company is giving customers greater visibility throughout the network, thanks to integration with SteelCentral, its end-to-end performance management platform; the SteelHead products; and Riverbed’s Interceptor offering, which gives SteelConnect greater scale for larger enterprise deployments. Riverbed Chairman and CEO Jerry Kennelly said, “Today, we’re delivering a software-defined architecture for a software-defined world, and expanding that infrastructure deeper into the cloud and more broadly across all end users.”

In addition to the new SteelConnect 2.0 release, SteelCentral, the company’s end-to-end performance management platform, will now incorporate technology from Aternity, which Riverbed acquired in July. Aternity brings the ability to monitor application performance on physical and mobile end-user devices to the SteelCentral product line. Adding the Aternity technology and extending visibility into end-user devices gives Riverbed a full portfolio of management offerings, according to Nik Koutsoukos, vice president of product marketing at Riverbed. “This brings full end-to-end management capabilities to those who need it most,” Koutsoukos told GigaOM.



Survey Reveals InfoSec is Doing it all Wrong!

While “doing it all wrong” may be an exaggeration, no one can deny that breaches are on the rise and that IT security solutions seem to be falling behind the attack curve. Yet those looking to place blame may only need to look in the mirror. At least, that is what a survey from cybersecurity vendor BeyondTrust indicates.

BeyondTrust surveyed over 500 senior IT, IS, legal, and compliance experts about their privileged access management practices. The survey revealed some interesting trends, some of which fall under the banner of “they should know better.” For example, only 14 percent regularly cycle their passwords, meaning that 86 percent of those surveyed are skipping one of the top best practices for password and credential management. Adding insult to injury, only 3 percent of those surveyed monitor systems in real time and have the capability to terminate a live session that may be indicative of a breach.

Simply put, the survey indicates that the majority of organizations need to do much more to protect systems from breaches, many of which could be easily avoided if the proper policies were put into effect. The survey also revealed that 52 percent of respondents are not doing enough about known risks; in other words, they understand what the risks are, but have not deployed the technologies or crafted the policies to mitigate them.

Mitigating those risks should be one of the top jobs of infosec today, especially since most of the identified risks can be quickly resolved using off-the-shelf products and by simply applying best practices. BeyondTrust has developed some recommendations that infosec professionals can take to heart to lower risk and harden systems against breaches.

Those recommendations include:

  • Be granular: Implement granular least privilege policies to balance security with productivity. Elevate applications, not users.
  • Know the risk: Use vulnerability assessments to achieve a holistic view of privileged security. Never elevate an application’s privileges without knowing if there are known vulnerabilities.
  • Augment technology with process: Reinforce enterprise password hygiene with policy and an overall solution. As the first line of defense, establish a policy that requires regular password rotation and centralizes the credential management process.
  • Take immediate action: Improve real-time monitoring of privileged sessions. Real-time monitoring and termination capabilities are vital to mitigating a data breach as it happens, rather than simply investigating after the incident.
  • Close the gap: Integrate solutions across deployments to reduce cost and complexity, and improve results. Avoid point products that don’t scale. Look for broad solutions that span multiple environments and integrate with other security systems, leaving fewer gaps.
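The password rotation recommendation, for instance, is straightforward to operationalize. The fragment below is a minimal sketch of flagging credentials that have aged past a rotation policy; the 90-day window and account names are assumptions for illustration, not part of BeyondTrust’s product.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)  # assumed rotation policy

def stale_credentials(last_rotated, today):
    """Return the accounts whose passwords are past the rotation window."""
    return [acct for acct, when in last_rotated.items()
            if today - when > MAX_AGE]

# Hypothetical privileged accounts and their last rotation dates.
last_rotated = {
    "svc-backup": date(2016, 1, 10),
    "db-admin":   date(2016, 8, 1),
}
print(stale_credentials(last_rotated, today=date(2016, 9, 14)))
```

A real privileged access management tool rotates credentials automatically rather than merely reporting on them, but even a report like this surfaces the 86 percent problem the survey describes.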


In an interview with GigaOM, Kevin Hickey, President and CEO at BeyondTrust, offered “Companies that employ best practices and use practical solutions to restrict access and monitor conditions are far better equipped to handle today’s threat landscape.”

Hickey added “The survey proved critical for helping BeyondTrust to better identify threats based upon privilege management, and also helped us evolve our product offerings to make privilege management a much easier process for security professionals.”

Hickey’s statements were validated by the launch of new product offerings aimed at bringing ease of privilege management to those charged with IT security. The two new offerings are the BeyondTrust Managed Service Provider (MSP) Program and an Amazon Machine Image (AMI) of BeyondInsight, available on the Amazon Marketplace. Those products are geared to prevent breaches that involve privileged credentials across deployments that include on-premises solutions, virtual appliances, the cloud, and managed service providers.


Hyper-Convergence Poses Unique Challenges for SAN Technologies

With the move toward hyper-convergence in full swing, many organizations face the challenge of moving their massive data stores into virtualized environments, a situation that came to the forefront at VMworld 2016, where all things related to hyper-convergence were discussed ad nauseam.

Even so, many were still left wondering whether it is even possible for traditional storage technologies, such as SAN and NAS, to effectively coexist in an environment transitioning to hyper-convergence. What’s more, the uncertainties of the transition, driven by potential communications problems, performance issues, and incompatibilities, could force wholesale, expensive upgrades to support the move to hyper-convergence, an issue many network managers and CIOs would love to avoid.

Simply put, the move toward hyper-convergence, which promises improved efficiencies and reduced operating expenses, can be derailed by the high costs of transitioning to virtualized SANs, an irony worth noting. Nevertheless, those challenges have not stopped VMware Virtual SAN from becoming the fastest-growing hyper-converged solution, with over 3,000 customers to date. That said, there is still room for improvement, such as helping VMware Virtual SAN support even more workloads, and that is exactly where vendor Primary Data comes into play.

At VMworld 2016, Primary Data announced the availability of the company’s DataSphere platform, which brings a storage-agnostic layer to virtualized environments. In other words, Primary Data is able to tear down storage silos without actually disrupting their configuration. It accomplishes that by creating a virtualization platform that masks the individual storage silos and presents them as a unified, tiered storage lake, driven by policies and offering almost infinite configuration options.

Abstracting data from storage hardware is not a new idea. However, Primary Data goes far beyond what companies such as FalconStor and StoneFly bring to the world of hyper-convergence. For example, DataSphere offers a single-pane-of-glass management console, which unifies management across the various storage tiers, regardless of storage type. What’s more, the platform goes beyond the concept of an SLA (Service Level Agreement) and introduces a new concept, aptly abbreviated as SLO (Service Level Objective). Primary Data executive Kaycee Lai explained to GigaOM that “SLOs are business objectives for applications. They define a commitment to maintain a particular state of the service in a given period. For example, specific write IOPS, read IOPS, latency, and so forth, to maintain for each application. SLOs are measurable characteristics of the SLA.”

Lai added, “DataSphere will support DAS, NAS, and Object as storage types. Block level support for SAN will follow in the next release.” One of the platform’s key elements is the ability to work with storage tiers without the disruption of rebuilding storage silos. Lai added, “Tiers are a logical concept in DataSphere. Tiers are simply a class of storage that is mapped to a particular SLO. The notion of having multiple tiers is not as important as having multiple objectives requiring the specific storage to meet those objectives. Customers can create as many objectives as their business requires.”
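To make the SLO concept concrete, the sketch below models one as a set of measurable targets and checks observed tier metrics against it. The field names and thresholds are illustrative assumptions, not DataSphere’s actual schema.

```python
# A Service Level Objective expressed as measurable targets (illustrative).
slo = {"read_iops_min": 5000, "write_iops_min": 1000, "latency_ms_max": 10}

def meets_slo(metrics, slo):
    """True if observed storage metrics satisfy every SLO target."""
    return (metrics["read_iops"] >= slo["read_iops_min"]
            and metrics["write_iops"] >= slo["write_iops_min"]
            and metrics["latency_ms"] <= slo["latency_ms_max"])

# Hypothetical observed metrics for two storage tiers.
fast_tier = {"read_iops": 8000, "write_iops": 1500, "latency_ms": 4}
slow_tier = {"read_iops": 1200, "write_iops": 300, "latency_ms": 25}
print(meets_slo(fast_tier, slo))  # eligible tier for this workload
print(meets_slo(slow_tier, slo))  # data here should move to a faster tier
```

In Lai’s framing, a tier is just whatever class of storage currently satisfies an objective, so a policy engine evaluating checks like this can drive data placement automatically.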

In the quest to make hyper-convergence commonplace, Primary Data smooths the bumpy storage path with several capabilities, which the company identifies as:

  • Adapt to continually changing business objectives with intelligent data mobility.
  • Scale performance and capacity linearly and limitlessly with unique out-of-band architecture.
  • Reduce costs through increased resource utilization and simplified operations.
  • Simplify management through global and automated policies.
  • Accelerate upgrades of new solutions such as VMware vSphere 6 with seamless migration using existing infrastructure.
  • Reduce application downtime with automated non-disruptive movement of data.
  • Deliver a full range of data services across all applications in the data center.




Announcing the Full Keynote Panelist Lineup at Gigaom Change

Gigaom Change 2016 Leader’s Summit is just one week away, September 21-23 in Austin. The event spans two and a half days of keynote panels, with a lineup of speakers who are visionaries making R&D and proof-of-concept strategic investments to bring concept to reality, forging multi-billion-dollar companies along the way.

Three top industry experts in each of the following areas will highlight the current impact these innovations are having, then pivot toward what will be possible in the future: Robotics, AI, AR/VR/MR, Human-Machine Interface, Cybersecurity, Nanotechnology, and 3D+ Printing.

Keynote panelists include leading theorists and visionaries like Robert Metcalfe, Professor of Innovation and Murchison Fellow of Free Enterprise at the University of Texas, and Rob High, IBM Fellow, Vice President and CTO of IBM Watson. The lineup also includes practitioners who are actively implementing these technologies within companies, like Shane Wall, CTO and Global Head of HP Labs; Melonee Wise, CEO of Fetch Robotics; Stan Deans, President of UPS Global Logistics and Distribution; and Rohit Prasad, Vice President and Head Scientist, Amazon Alexa. We will hear from Sapient about AI, IBM about nanotech, SoftBank about robots, and a wide range of other innovators creating solutions for visionary enterprises.

We couldn’t be more excited to introduce you to the full lineup of this extraordinary group.

Our opening night keynote speaker will be internet/ethernet pioneer Robert Metcalfe, Professor of Innovation and Murchison Fellow of Free Enterprise at The University of Texas.

Speaking on the VR/AR/MR panel is Jacquelyn Ford Morie, Ph.D., Founder and CEO of All These Worlds LLC and Founder and CTO of The Augmented Traveler Corp. Dr. Morie is widely known for using technology such as virtual reality to deliver meaningful experiences that enrich people’s lives.

Discussing the subject of robotics is Rodolphe Gelin, EVP and Chief Scientific Officer, SoftBank Robotics. Gelin has worked for decades in the field of robotics, focusing primarily on developing mobile robots for service applications to aid the disabled and elderly. He heads the Romeo2 project to create a humanoid personal assistant and companion robot.

On the artificial intelligence panel, Manoj Saxena, Executive Chairman of CognitiveScale and a founding managing director of The Entrepreneurs’ Fund IV, a $100m seed fund, will address the cognitive computing space.

Speaking on the subject of nanotechnology is Dr. Heike Riel, IBM Fellow and Director of the Physical Sciences Department, IBM Research. Dr. Riel’s work focuses on advancing the frontiers of information technology through the physical sciences.

Addressing human-machine interface is Mark Rolston, Cofounder and Chief Creative Officer, argodesign. Rolston is a renowned designer who focuses on groundbreaking user experiences and addresses the modern challenge of design beyond the visible artifact, in the realm of behavior, the interaction between human and machine, and other unseen elements.

Discussing the subject of artificial intelligence is Rob High, IBM Fellow, Vice President and Chief Technology Officer of IBM Watson. High has overall responsibility to drive Watson technical strategy and thought leadership.

Addressing nanotechnology is Dr. Michael Edelman, Chief Executive Officer of Nanoco. Through his work with Nanoco, Dr. Edelman and his team have developed an innovative technology platform using quantum dots that is set to transform lighting, bio-imaging, and much more.

As CEO of Fetch Robotics, which delivers advanced robots for the logistics industry, Melonee Wise will speak to the state of robotics today and the need and potential for the entire industry to transform to meet demand for faster, more personalized logistics/ops delivery using “collaborative robotics.”

As Chief Technology Officer and Global Head of HP Labs, Shane Wall drives the company’s technology vision and strategy, new business incubation, and the overall technical and innovation community. Joining our 3D+ Printing panel, Wall will provide real insights into how 3D+ printing is going to transform and disrupt manufacturing, supply chains, even whole economies.

Taking a place on the Human-Machine Interface panel is David Rose, an award-winning entrepreneur, author, and instructor at the MIT Media Lab. His research focuses on making the physical environment an interface to digital information.

Joining the 3D+ Printing panel is Stan Deans, President of UPS Global Logistics and Distribution. Deans has been instrumental in building UPS’s relationship with Fast Radius by implementing its On Demand Production Platform™ and 3D printing factory in UPS’s Louisville-based logistics campus. By building this disruptive technology into its supply chain models, UPS is now able to bring new value to manufacturing customers of all sizes.

Addressing human-machine interface is Rohit Prasad, Vice President and Head Scientist, Amazon Alexa, where he leads research and development in speech recognition, natural language understanding, and machine learning technologies to enhance customer interactions with Amazon’s products and services.

Joining our AR/VR/MR panel, Liam Quinn is VP and Senior Fellow/CTO for Dell, responsible for leading the development of the company’s overall technology strategy. A key passion is xReality, where Quinn drives the development and integration of specific applications across AR and VR experiences, as well as remote maintenance, gaming, and 3D applications.

Niloofar Razi is SVP and Worldwide Chief Strategy Officer for RSA. As part of the Cybersecurity panel, she brings more than 25 years’ experience in the technology and national security sectors, leading corporate development and implementation of investment strategies for billion-dollar industries.

Michael Petch is a renowned author and analyst whose expertise in 3D+ printing will bring deep insights into advanced, additive manufacturing technologies on our Nanotechnology panel. He is a frequent keynote speaker on the economic and social implications of frontier technologies.

Josh Sutton is Global Head of Data and Artificial Intelligence for Publicis.Sapient. As part of the AI panel, Josh will discuss how to leverage established and emerging artificial intelligence platforms to generate business insights, drive customer engagement, and accelerate business processes via advanced technologies.

Joining our AR/VR/MR panel is Melissa Morman, Client Experience Officer, BuilderHomesite Inc. Morman is a member of the original founding executive team of BHI/BDX (Builders Digital Experience) and advises top executives in the homebuilding, real estate, and building products industries on the digital transformation of their business.

Joining our Cybersecurity panel is John McClurg, VP and Ambassador-At-Large, Cylance. McClurg was recently voted one of America’s 25 most influential security professionals, sits on the FBI’s Domestic Security Alliance Council (DSAC) and National Security Business Alliance Council (NSBAC), and served as the founding Chairman of the International Security Foundation.

Speaking on our Cybersecurity panel is Mark Hatfield, Founder and General Partner of Ten Eleven Ventures, the industry’s first venture capital fund focused solely on investing in digital security.

Speaking on our robotics panel is Mark Halverson, CEO of Precision Autonomy, whose mission is to make unmanned and autonomous vehicles a safe reality. Precision Autonomy operates at the intersection of artificial intelligence and robotics, employing crowdsourcing and three-dimensional augmented reality to allow UAVs and other unmanned vehicles to operate more autonomously.

Special guest James V. Hart is an award-winning and world-renowned Hollywood screenwriter whose film credits include Contact, Hook, Bram Stoker’s Dracula, Lara Croft: Tomb Raider, August Rush, Epic, and many more projects in various stages of development, including Kurt Vonnegut’s AI-fueled story Player Piano. With us he’ll discuss the impact of storytelling on how we’ve formed our views of the future.

Gigaom Change 2016 Leader’s Summit is just one week away, September 21-23 in Austin, but there are still a few tickets available for purchase. Reserve your seat today.


Research Proves that a Customer-Centric Approach Can Bring Unforeseen Value

Service management vendor ServiceNow recently commissioned Intergram Research to conduct a survey that dispels some of the common myths around service enablement, a realization ServiceNow has long prophesied. In an interview with GigaOM, Holly Simmons, Senior Director of Global Product Marketing, Customer Service Management, said “the survey found that the companies that excel at customer service are 127% more likely to enable their customer service agents to enlist the help of different parts of the organization in real-time.”

Simply put, by transforming customer service into a team sport, organizations can better meet the needs of their customers in a much shorter time frame. However, that transformation requires more than basic intention; it requires a platform that can tear down the silos surrounding people and systems, ultimately delivering the ability to share resolutions and improve customer service across the whole services spectrum.

That ideology is backed by the findings of Intergram Research, which surveyed senior managers in customer service roles at 200 U.S. enterprises with at least 500 employees.

The Survey Results:

The survey revealed three characteristics that separate the companies with the very best customer service from those that struggle. Companies identified as top-tier are:

  • More collaborative. They are more likely to have enabled their customer service agents to engage the help of different parts of the organization when addressing a customer’s problem.
  • Better problem-solvers. Customer service leaders are also more likely to be able to resolve the root cause of a customer’s problem (a crucial component of closing the resolution gap).
  • Self-service providers. And finally, these top-tier organizations are more likely to offer self-service options for common requests, freeing agents to focus on more strategic issues.

While for some the above may amount to little more than common sense, the fact of the matter is that many organizations have created silos around their various customer service elements, which hampers collaboration and adds to the time it takes to solve a customer’s problems. What’s more, those silos add hidden expenses to already overtaxed support resources, meaning that the collective knowledge of customer support must be relearned during almost any new interaction.

It is those inefficiencies that lead to customers fleeing from specific vendors, especially in the realm of IT. If a customer or client cannot get a quick resolution to a problem, then they may take their business elsewhere.

Simmons adds, “Resolving a customer’s issue quickly and effectively requires real-time collaboration, coordination, and accountability among customer service, engineering, operations, field services and other departments. But that’s just not happening at more than half of the companies surveyed. Customer service still sits on an island without a bridge to other departments, partners, and customers. That slows the resolution process, and frustrates both customers and the agents trying to help them.”

The survey also illustrated that the primary problems facing organizations seeking to improve customer service include the difficulty of connecting all service processes, compounded by siloed service departments and a lack of automation. Each of those three factors affected more than 50% of those surveyed and, even when viewed as single issues, proved to be a primary barrier to successful customer service transformation.

Call to Action:

While the survey highlights both the problems and solutions surrounding agile customer service, transformation can only take place if certain ideologies are upheld. According to ServiceNow, organizations that treat customer service as a “team sport” and engage the right people from relevant departments to solve problems are in a better position to proactively address the underlying reasons for customer calls. They also empower their customers to quickly answer their own questions through self-service portals, knowledge bases, and communities, further reducing the need to interact with customer service agents. The more sophisticated customer service organizations aspire to the ideal of “no-service” by combining these practices to help eliminate the reasons for customer calls in the first place.



Fluke briefing report: Closing the gap between things and reality

The Internet of Things is great, right? I refer the reader to the vast amount of positive literature that is washing through the blogosphere, no doubt being added to even as I write this. At the same time, plenty of people are pointing out the downsides: data security, for example, broader surveillance issues, or indeed the potential for any ‘smart’ object to be hacked.

All well and good, in other words it’s a typical day in techno-paradise. But the conversation itself is skewed towards the ability to smarten up — that is, deliver new generations of devices that have wireless sensors built in. What of the other objects that make up 98% (I estimate) of the world that we live in?

Enter companies such as Fluke, which earned its stripes over many years of delivering measurement kit to engineers and technicians, from multimeters to higher-end stuff such as thermal imaging and vibration testing. While such companies might not have a high profile outside of operational circles, they are recognising the rising tide of connectedness and doing something about it in their own domains.

In Fluke’s case, this means manufacturing plants, construction sites and other places where the term ‘rugged’ is a need to have, not a nice to have. Such sites have plenty of equipment that can’t simply be replaced with a smarter version, but which nonetheless can benefit substantially from remote measurement and management.

The current consequence, Fluke told me in a recent briefing about their let’s-connect-the-world platform (snappily titled the “3500 FC Series Condition Monitoring System”), is that failures are captured after the event. “We have more than 100,000 pieces of equipment and the reliability team can only assess so many. We’ve never been able to have maintenance techs collect data for us, until now,” reports a maintenance supervisor at one US car manufacturer.

That Fluke are upbeat about the market opportunity nearly goes without saying — after all, there really is a vast pool of equipment that can seriously benefit from being joined up — but the point is, the model goes as wide as there are physical objects to manage. And equally there’s a ton of companies like Fluke that are smartening up their own domains, making a splash in their own jurisdictions. Zebra’s smart wine rack may just have been a proof of concept, but give it five years and all wine lovers will have one.

Inevitably, there will be a moment of shared epiphany when all such platforms start integrating together, coupled with some kind of Highlander-like fight as IoT integration and management platforms look to knock the rest out of the market. I’m reminded of the moment, back in the early ’90s, when telecoms manufacturers adopted the HP OpenView platform en masse, leading to possibly the dullest Interop Expo on record.

Yes, the future will be boring, as we default to using stuff that we can remotely monitor and control. As consumers we may still like using ‘dumb stuff’ but for businesses that interface with the physical world, to do so would make no commercial sense. Equally however, such a dull truth will provide a platform for new kinds of innovation.

I could postulate what these might be but the Law of Unexpected Consequences has the advantage. All I do know is, it won’t be long at all before what is seen as exceptional — the ability to monitor just about everything — will be accepted as the norm. At that point, and to make better use of one of Apple’s catchphrases, everything really will be different.


Welcome to the Post-Email Enterprise: What Skype Teams Means in a Slack-Leaning World

Work technology vendors very commonly — for decades — have suggested that their shiny brand-new tools will deliver us from the tyranny of email. Today, we hear it from all sorts of tool vendors:

  • work management tools, like Asana, Wrike, and Trello, built on the bones of a task manager with a layer of social communications grafted on top
  • work media tools, like Yammer, Jive, and the as-yet-unreleased Facebook for Work, built on a social networking model to move communications out of email, they say
  • and most prominently, the newest wave of upstarts, the work chat cadre, has arrived, led by Atlassian’s Hipchat but above all by the mega-unicorn Slack, a company with such a strong gravitational field that it seems to have sucked the entire work technology ecosystem into the black hole around its disarmingly simple model of chat rooms and flexible integrations.

Has the millennium finally come? Will this newest paradigm for workgroup communications unseat email, the apparently undisruptable but deeply unlovable technology at the foundation of much enterprise and consumer communication?

Well, a new announcement hit my radar screen today, and I think that we may be at a turning point. In the words of Winston Churchill, in November 1942 after the Second Battle of El Alamein, when it seemed clear that the WWII allies would push Germany from North Africa,

Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.

And what is this news that suggests to me we may be on the downslope in the century-long reign of email?

Microsoft is apparently working on a response to Slack, six months after the widely reported termination of acquisition discussions. There has been a great deal of speculation about Microsoft’s efforts in this area, especially considering the now-almost-forgotten acquisition of Yammer (see Why Yammer Deal Makes Sense, and it did make sense in 2012). However, after that acquisition, Microsoft (and especially Bill Gates, apparently) believed they would be better off building Slackish capabilities into an existing Microsoft brand. But since Yammer is now an unloved product inside the company, the plan was to build these capabilities into something the company has doubled down on. So now we see Skype Teams, coming soon.

Microsoft may be criticized for attempting to squish too much into the Skype wrapper with Skype Teams, but we’ll have to see how it all works together. It is clear that integrated video conferencing is a key element of where work chat is headed, so Microsoft would have had to come up with that anyway. And Skype certainly has the rest of what is needed for an enterprise work chat platform, plus hundreds of millions of email users currently on Exchange and Office 365.

The rest of the details will have to wait for actual hands-on inspection (so far, I have had only a few confidential discussions with Microsofties), but an orderly plan from Microsoft for migrating away from email-centric work technologies to a work chat-centric model means the shift is now mainstream, not just the habit of a bunch of bi-coastal technoids. This will be rolled out everywhere.

So, we are moving into a new territory, a time where work chat tools will become the super dominant workgroup communications platform of the next few decades. This means that the barriers to widespread adoption will have to be resolved, most notably, work chat interoperability.

Most folks don’t know the history of email well enough to recall that at one time email products did not interconnect: my company’s email system could not send a message to yours. However, the rise of the internet and the creation of international email protocols led to a rapid transition, so that we could stop using CompuServe and AOL to communicate outside the company.

It was that interoperability that led to email’s dominance in work communications, and similarly, it will take interoperability of work chat to displace it.
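The open standards behind that interoperability are still visible in any modern mail stack. As a minimal sketch, here is how a standards-compliant message is assembled with Python's stdlib `email` package; any SMTP-speaking server can relay it, regardless of vendor. The addresses and mail host are hypothetical, used only for illustration:

```python
from email.message import EmailMessage

def make_interop_message(sender: str, recipient: str,
                         subject: str, body: str) -> EmailMessage:
    """Build an RFC 5322-style message that any SMTP server can relay."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = make_interop_message(
    "alice@company-a.example",   # hypothetical sender
    "bob@company-b.example",     # hypothetical recipient at another company
    "Cross-company mail",
    "Delivered via open protocols, not a walled garden.",
)

# The actual relay step would use smtplib against a real mail host, e.g.:
# import smtplib
# with smtplib.SMTP("mail.company-a.example") as s:
#     s.send_message(msg)
```

Because the wire format and the transfer protocol are both standardized, the sending and receiving companies never need to run the same vendor's product; that is precisely the property work chat still lacks.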

In this way, in the not-too-distant future, my company could be using Slack while yours might be using Skype Teams. I could invite you and your team to coordinate work in a chat channel I’ve set up, and you would be able to interact with me and mine.

If the world of work technology is to avoid collapsing into an all-encompassing monopoly with Slack at the center, we have to imagine interoperability will emerge relatively quickly. Today’s crude integrations, where Zapier or IFTTT copy new posts in Hipchat to a corresponding channel in Slack, will quickly be replaced by protocols that all competitive solutions will offer. And Skype is the irritant that will motivate all these giants to make a small peace around interoperability, in order to be able to play nice with Slack.
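To make the crudeness of today's integrations concrete, here is a minimal sketch of the copy-posts-across pattern: a message from another chat tool is re-wrapped as a payload for a Slack incoming webhook. Slack's incoming webhooks really do accept a JSON `{"text": ...}` body via HTTP POST; the webhook URL, source system, and message formatting below are hypothetical assumptions for illustration:

```python
import json
from urllib import request

def build_slack_payload(source_system: str, author: str, text: str) -> bytes:
    """Re-wrap a post from another chat tool as a Slack
    incoming-webhook JSON payload, tagging its origin."""
    return json.dumps(
        {"text": f"[{source_system}] {author}: {text}"}
    ).encode("utf-8")

def forward_to_slack(webhook_url: str, payload: bytes) -> int:
    """POST the payload to a Slack incoming webhook; returns the HTTP status."""
    req = request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status

payload = build_slack_payload("Hipchat", "alice", "Build is green")
# forward_to_slack("https://hooks.slack.com/services/T00/B00/XXX", payload)  # hypothetical URL
```

Note how one-directional and lossy this is: no threading, no identity mapping, no presence. A genuine interoperability protocol would carry all of that natively, which is exactly the gap the paragraph above describes.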

We’ll have to see the specifics of Skype Teams, and where Facebook at Work is headed. Likewise, all internet giants — including Apple, Google, and Amazon — seem to be quietly consolidating their market advantages in file sync-and-share, cloud computing, social networks, and mobile devices. Will we see a Twitter for Work, for example, after a Google acquisition? Surely Google Inbox and Google+ aren’t the last work technologies that Alphabet intends for us? How might Slack fit into Amazon’s designs? That might surprise a lot of people.

But no matter the specifics, we are certainly on the downslopes of the supremacy of email. We may have to wait an additional 50 years for its last gasping breath, but we’re now clearly in the chat (and work chat) era of human communications, and there’s no turning back.


Is There Life After Dell? SonicWALL Thinks So!

When SonicWALL was acquired by Dell back in 2012, many wondered how SonicWALL would fare under the auspices of the industry giant. That said, SonicWALL managed to maintain market share in its core SMB business sector and start making inroads into the large, distributed enterprise sector. Nonetheless, when Dell decided to sell off its software assets, SonicWALL among them, to private equity firms, many began to wonder once again what that meant for SonicWALL.

SonicWALL provided the answers to those queries at the company’s PEAK 2016 event, which was held last week in Las Vegas. The primary topics of discussion focused on applying SonicWALL technology and what the future holds for SonicWALL, its partners and customers.

Along with the requisite product announcements, SonicWALL also hosted several educational sessions bringing cloud security to the forefront of partners’ minds, as well as the challenges created by the ever-growing IoT infrastructure spreading through enterprises today.

SonicWALL offered a strong message that there is life after Dell, and that the company will thrive and grow despite the forced separation. For example, SonicWALL is in the process of strengthening its channel programs to better support both its partners and end customers. The company also announced its Cloud GMS offering, which is aimed at simplifying management, enhancing reporting, and reducing overhead. What’s more, Cloud GMS brings cloud-based management, patching and updating to the company’s army of partners, providing them with a critical weapon in the battle against hosted security vendors and those plying “firewalls in the cloud” as a means to an end.

The importance of the forthcoming Cloud Global Management System (GMS) cannot be overstated. SonicWALL aims to eliminate the financial, technical support and system maintenance hurdles normally associated with traditional firewalls, transforming what was once an isolated security solution into a cloud-managed security platform, a capability that will prove important to both customers and partners.

For partners, Cloud GMS brings a unique, comprehensive, low-cost monthly subscription to the table, priced according to the number of firewalls under management. That model will allow partners to become something akin to hosted security service providers, shifting customer expenses to OpEx instead of CapEx.

The SonicWALL Cloud GMS solution offers:

  • Governance: Establishes a cohesive approach to security management, reporting and analytics to simplify and unify network security defense programs through automated and correlated workflows to form a fully coordinated security governance, compliance and risk management strategy.
  • Compliance: Rapidly responds and fulfills specific compliance regulations for regulatory bodies and auditors with automatic PCI, HIPAA and SOX reports, customized by any combination of auditable data.
  • Risk management: Provides the ability to move fast and drive collaboration and communication across a shared security framework, making quick security policy decisions based on time-critical and consolidated information for higher-level security efficacy.
  • Firewall management: MSPs will be able to leverage efficient, centralized management of firewall security policies similar to on-premises GMS features, including customer sub-account creation and increased control of user type and access privilege settings.
  • Firewall reporting: Real-time and historical, per firewall, and aggregated reporting of firewall security, data and user events will give MSPs greater visibility, control and governance while maintaining the privacy and confidentiality of customer data.
  • Licensing management: Seamless integration between GMS and MySonicWALL interfaces will allow users to easily and simply log into Hosted GMS to organize user group names and memberships, device group names and memberships, as well as adding and renewing subscriptions and support.

