
Predictive analytics can transform IT—from big data to print

Big data has become one of the go-to buzzwords for ambitious startups and breathless tech reporters, a stand-in for just about any type of intelligence-derived insight delivered by a computer. In reality, big data is an increasingly specialized and targeted part of the IT toolbox—not the cure-all it’s often pitched as but a vaccine to be deployed with specific goals in mind.

When used properly, big data analysis provides so much insight it earns another name entirely: predictive analytics. This is an even more specialized area, with potentially even greater returns—if you know how and where it works best.

Predict the unpredictable

Predictive analytics is the discipline of quantifying past behaviour to gain insight into the most likely dynamics of that same behaviour in the future. It begins with robust data collection, either by setting up your processes to harvest the customer, employee, product, or other information you need, or by simply purchasing it from a third-party specialist who’s collected the information on their own. Then, analysts use sophisticated mathematical models to find hidden trends within this data—and the effects of these analyses can prove enormous.

Take, for instance, the case of Orbitz.com, which found that users browsing their site on MacOS were 40 percent more likely to book a four- or five-star hotel than Windows users. This allowed Orbitz to tailor its search results based on the prediction that Mac users would be more interested in luxury hotels than their PC-using peers. This prediction lined up well with reality, and a rather simple change in a hotel suggestion algorithm drummed up significant new revenue and satisfied customers with shorter search times.
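A rule like the one Orbitz derived can be surprisingly simple to act on. Here's a minimal sketch of how a ranking tweak based on that kind of prediction might look; the function names, weights, and data format are illustrative assumptions, not Orbitz's actual code.

```python
# Hypothetical sketch: boost luxury listings for visitors on macOS,
# mirroring the kind of rule the Orbitz finding could drive.
# Names and weights here are illustrative, not Orbitz's implementation.

def rank_hotels(hotels, user_os, luxury_boost=1.4):
    """Sort hotels by a score that favours four- and five-star
    properties when the visitor is browsing from macOS."""
    def score(hotel):
        base = hotel["relevance"]
        if user_os == "macOS" and hotel["stars"] >= 4:
            base *= luxury_boost  # ~40% lift, echoing the observed behaviour
        return base
    return sorted(hotels, key=score, reverse=True)

hotels = [
    {"name": "Budget Inn", "stars": 2, "relevance": 0.9},
    {"name": "Grand Palace", "stars": 5, "relevance": 0.7},
]
print([h["name"] for h in rank_hotels(hotels, "macOS")])
# → ['Grand Palace', 'Budget Inn']
```

The entire "prediction" lives in one multiplier; the hard part, as the article notes, is the data collection and analysis that justifies it.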

Everyone from insurance companies to Netflix to HR management firms is making use of big data to make smarter decisions that improve their businesses.

Big data + automation = IT overpowered

Driving the potential of this phenomenon is the capacity to feed data into automated systems. When data mining is reliable, you can trust it to produce insights that guide algorithmic management processes. IT security, in particular, has never been the same since the advent of semi-automated tech that bases its security decisions on real-time analytics of network activity. However, the abilities of big data are now so widely useful they’re becoming indispensable even beyond highly technical disciplines, like cybersecurity.

A sufficiently empowered system of predictive analytics can notice complex trends in customer behaviour and quickly adjust suggested store content to match their changing tastes. Another system could look at the current level of scheduled overtime and automatically order a new batch of coffee beans for the office to fuel workers through a rough season. Some of the most innovative work in IT today has to do with the integration of predictive analytics and the office's most old-school devices. Though it might seem incongruous to match such a bleeding-edge tool to legacy systems, the confluence of new and old can widen bottlenecks you never even knew were there.

Big data and printing are becoming inseparable

Modern analytics can start to alleviate some of the most intractable problems in IT—and when IT professionals think about intractable problems, there are few office devices that spring to mind more readily than the printer. Not only does it create new scheduling requirements for paper, toner, and more, but lower-end printers and print services often come with shoddy software or buggy network code that can lead to serial complaints and, thus, serial headaches.

That changes with devices that work with advanced, predictive systems integrating big data and printing. Any full-featured print service worth investing in now includes advanced predictive technology that can rapidly assess and repair issues as they arise, before they cause backups. This type of advanced tech can also notice conflicts between print tasks and scheduled (or easily predicted) downtime and patterns in connectivity issues that could result in lost data.

Even something as simple as a depleted toner cartridge can be a drain on the schedule. A system that can see such an event coming and automatically schedule the appropriate maintenance, however, eliminates that drain. With analytics, that type of service is taken out of your hands, allowing you to spend less time worrying about printers and more time worrying about the business.
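The toner scenario comes down to a simple projection: estimate the depletion rate from recent readings and flag the cartridge before the run-out date falls inside the supplier's delivery window. Here's a minimal sketch under assumed inputs (daily percentage readings, a fixed lead time); a real print service would draw these from device telemetry.

```python
# Illustrative sketch: project when a toner cartridge runs out from
# recent usage readings, and flag it for replacement before that date.
# The reading format and lead time are assumptions for the example.

def days_until_empty(levels):
    """levels: toner percentages from consecutive daily readings,
    newest last. Returns estimated days until 0%, or None if flat."""
    if len(levels) < 2:
        return None
    daily_drop = (levels[0] - levels[-1]) / (len(levels) - 1)
    if daily_drop <= 0:
        return None  # no measurable consumption to project from
    return levels[-1] / daily_drop

def needs_order(levels, lead_time_days=5):
    """Order a replacement when the projected run-out falls inside
    the supplier's delivery lead time."""
    remaining = days_until_empty(levels)
    return remaining is not None and remaining <= lead_time_days

print(needs_order([40, 36, 32, 28, 24]))
# → False (6 days of toner left against a 5-day lead time)
```

The point isn't the arithmetic; it's that a system running this check on every device removes a whole category of interruptions from a human's to-do list.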

Get back to the big picture

The term big data may get abused in the culture at large, but it has specific applications in the IT world—and many of these applications come in the form of predictive analytics. The predictions made possible by big data analytics can save time for existing IT workers, thus freeing them to focus on the innovative projects that had previously been too time-consuming to consider.

By integrating big data and predictive analytics with IT management, you can get back to the kind of big-picture, process-level improvements that were always supposed to define the impact of IT in the first place.

Rethinking cybersecurity architectures with adaptive security

Cybersecurity threats are evolving, becoming more complex and increasing in volume. That’s why leaders in the cybersecurity field are rethinking and enhancing their approach—and turning to adaptive security, specifically.

Until now, cybersecurity practice has focused heavily on preventing attacks. This “ring of iron” methodology may have been appropriate in the early days of network security, when users were less mobile and there wasn’t any need for outsiders to use an organization’s computing resources. Things are a bit more complicated today.

In modern computing environments, users move freely inside and outside the network perimeter, accessing IT resources from different devices. Customers, business partners, and third-party service providers all access applications in enterprise data centres. Protecting just the edge of the network isn’t enough, because the seal on your infrastructure is far from airtight.

Go for a multi-layered approach

Adaptive security brings a more fluid, multi-layered approach to security. It complements attack prevention with three other components: detection, retrospection, and prediction. Detection seeks out attacks that have evaded preventative measures so that security experts can remove them before they do more damage. When these attacks are discovered, retrospective analyses enable security professionals to find out what happened and why, so they can take measures to avoid it happening again.

The predictive part of adaptive security architecture tries to work out what will happen next. It uses threat intelligence, drawing on external data about what’s happening on malicious criminal networks to build a picture of emerging attack patterns. It then prioritizes and addresses exposures automatically.
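The "prioritize exposures automatically" step can be pictured as a scoring pass over known weaknesses, weighted by how actively each one is being exploited in the wild. The sketch below assumes a made-up feed format and scoring scheme; it stands in for what commercial threat-intelligence platforms do at far greater scale.

```python
# Minimal sketch of the "predict and prioritize" step: rank exposed
# assets by how actively their known flaws are being exploited,
# according to a threat-intel feed. Feed format is an assumption.

def prioritize(exposures, intel):
    """exposures: {asset: [cve_id, ...]};
    intel: {cve_id: exploit_activity between 0 and 1}.
    Returns (asset, score) pairs, highest risk first."""
    scored = []
    for asset, cves in exposures.items():
        score = sum(intel.get(cve, 0.0) for cve in cves)
        scored.append((asset, round(score, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

exposures = {"web-01": ["CVE-A", "CVE-B"], "db-01": ["CVE-C"]}
intel = {"CVE-A": 0.2, "CVE-B": 0.3, "CVE-C": 0.9}
print(prioritize(exposures, intel))
# → [('db-01', 0.9), ('web-01', 0.5)]
```

Feeding the top of that list into an automated patching or isolation workflow is what turns prediction into prevention.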

The key here is automation. As threats continue to grow, old-fashioned approaches to cybersecurity that rely on human analysis are becoming less effective. When Target was hacked in 2013, criminals compromised its POS systems by gaining access through a third-party HVAC service company’s account on Target’s network. The alert system flagged the intrusion, but human analysts simply didn’t see it for what it was. An adaptive architecture’s components work in concert, automatically adjusting to new threats and prioritizing protective measures as those threats emerge. The idea is to create a constantly flexing fabric of protection that blankets every part of the organization’s infrastructure, rather than a hard, rigid shell on the outside.

Analyze user behaviour

Gartner argues that analytics lies at the heart of this enhanced security culture. After all, unless you can measure and analyze what’s happening in your IT architecture, it will be difficult to adapt dynamically to any threats facing it. Analytics must cover not just what’s happening at the network edge, but also in the network infrastructure, at the endpoint and at the application level. A mature security strategy will also encompass something else: people.

User behavioural analytics (UBA) promises to be a useful tool as part of this new approach to security architecture. If there’s one thing that changes inside an organization, it’s user behaviour. Still a new technology itself, UBA is designed primarily to protect organizations against insider threats. It works by establishing a baseline of what constitutes normal user behaviour, and then constantly measuring new activity against it. If user behaviour seems inconsistent with past activity, it can elevate a user’s risk profile.

At some point, risk may be surfaced to a human analyst who will then decide what to do. A UBA system typically takes logged IT events as its input, including endpoint activities and network access. More sophisticated UBA deployments are also integrated with other corporate resources, such as human resource systems and even physical building management systems, which can provide even more intelligence.

If a sales executive printed off several documents relating to large customers just after a negative performance review, a well-tuned UBA system may pick up on it. Similarly, if an administration assistant’s badge was logged visiting the R&D lab on company campus after hours, a UBA system may raise her risk score. User behaviour analytics is a young technology that will need a mature IT team to implement it. Integration between UBA systems and other elements of IT infrastructure is often ad hoc, and configuration of a UBA system can be a challenge, requiring in-depth understanding of how an organization operates.
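The core UBA loop described above, building a baseline and measuring new activity against it, can be sketched in a few lines. This is a deliberately simplified statistical version (a z-score on one activity count); real UBA products model many signals at once, and the threshold and score values here are illustrative assumptions.

```python
# Hedged sketch of the UBA core loop: build a per-user baseline from
# past event counts, then bump the risk score when new activity
# deviates sharply. Thresholds and score values are illustrative.

from statistics import mean, stdev

def risk_delta(history, today, threshold=3.0):
    """history: daily counts of some logged activity (e.g. pages
    printed). Returns points to add to the user's risk score."""
    if len(history) < 2:
        return 0  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        sigma = 1.0  # avoid division by zero on a perfectly flat history
    z = (today - mu) / sigma
    return 10 if z > threshold else 0

history = [4, 6, 5, 5, 4, 6, 5]  # a normal week of printing
print(risk_delta(history, 5))    # → 0 (in line with baseline)
print(risk_delta(history, 40))   # → 10 (anomalous spike)
```

Note that the output is a risk adjustment, not a verdict: as the article says, elevated scores are surfaced to a human analyst who decides what to do.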

Because a dynamic, multi-layered security architecture draws on so many aspects of a company’s operation, security teams must learn to work with people from a variety of disciplines, including enterprise architects, network technicians and even human resources and compliance experts. Attempt an adaptive security deployment only with support from these people and from senior management.

The journey to a new, more robust type of security platform may be challenging, but if implemented correctly it could lighten the load for human analysts and create a more dynamic, pervasive security culture that can mould itself to any emerging threat.

Got back (end)? 4 ways to improve your mobile UX

The most effective customer relationship services of today and tomorrow are probably served up through smartphones. As content offerings for marketing, customer service, and loyalty programs become digital, there’s a huge movement in board meetings to focus on the mobile customer experience. McKinsey & Company has found that digital customer care can offer significant improvements in satisfaction metrics while driving major cost savings for the enterprise.

But your app is only as good as your uptime. More demand for digitization means a heavier workload for IT to keep customers—both internal and external app users—satisfied. The world’s shiniest mobile interface won’t make up for constant latency issues or repeated 502 errors.

Let’s take a dive into the decidedly unglamorous side of digital customer experience management: optimizing your infrastructure.

Why your back end really matters

Canadians are some of the most active citizens online, with 39 hours per month spent on the web on a desktop or laptop. Average time increased to almost 75 hours a month, or two and a half hours a day, with streaming and mobile devices, according to measurement firm comScore. Time spent engaged with streaming content has doubled during the past six years. While the explosive adoption of streaming video, media, and other forms of content is good news for your digitized brand experiences, it’s also a risk if IT departments can’t keep up with demand.

When latency or connectivity issues occur, your employees and customers could revolt. Recently, Slack was hit by a multi-hour outage caused by a third-party network failure, resulting in a panicked user base that took to social media to vent its frustration. Persistent connectivity issues with Instagram in April 2016 likewise drew public outcry from its dedicated user base.

According to TechCrunch, a Compuware survey revealed that users expect an app to load in a minuscule two seconds, and only 16 percent of users said they would give an app more than two attempts if they experience latency or connectivity issues. Still not scared? Amazon could stand to lose $1.6 billion in sales per year from a single second of delay, according to Fast Company.

Your employees may not have the option to drop a company app, but they’re likely to get frustrated over productivity lapses. For IT pros, the mobile customer experience is about more than the user interface and the quality of content available. It’s also a matter of consistency through smarter infrastructure.

Is your mobile infrastructure customer-ready?

There’s more to an adequate mobile infrastructure than basic components—it’s also about quality and planning. Until 2014, Facebook’s company motto was “Move fast and break things.” With 500 billion API calls hitting its platform each day, the company has since switched to “Move fast with stable infrastructure,” which Facebook CEO Mark Zuckerberg acknowledges is less catchy but more sensible.

This approach has lessons for brands that lack Facebook’s billion-strong user base. Here’s how to nail it:

1. Strive for elasticity

When demand is unpredictable, IT may need to explore options for elasticity, defined by VentureBeat as the “ability to expand capacity as needed in an instantaneous and ideally automated way.” While this option has the potential to eliminate the latency and connectivity issues associated with static virtual machines, it includes some implementation challenges. IT pros must explore effective, policy-based communication between containers and related security concerns.
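The elasticity idea reduces to a policy: translate current demand into a target instance count, bounded by a floor (for safety margin) and a ceiling (for cost control). The sketch below uses made-up capacity figures and isn't tied to any particular cloud provider's API.

```python
# Illustrative sketch of an elasticity policy: decide how many app
# instances to run from the current request load. Capacity and bounds
# are made-up parameters, not a specific cloud provider's API.

import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=20):
    """Scale out to cover demand, within fixed bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

print(desired_instances(50))    # → 2 (floor keeps a safety margin)
print(desired_instances(750))   # → 8
print(desired_instances(5000))  # → 20 (cap limits runaway cost)
```

In practice this decision runs on a loop against live metrics, which is exactly where the "instantaneous and ideally automated" part of the VentureBeat definition comes in.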

2. Fortify your VPN

If your internal or external app relies on proprietary or sensitive data for personalization, you’ll need a virtual private network (VPN) to secure data communications. Ensuring your mobile app is connected to a tough-as-nails VPN can prevent interception of data in transit. With stronger user authentication, you can serve up highly personalized customer experiences with minimized risk of a data breach.

3. Don’t plan for desktop usage habits

When it comes to an initial app launch, there’s danger in trying to translate desktop usage habits to mobile devices. Mobile engineering expert Farhan Thawar is a firm believer that analytics from web apps aren’t always a great indicator of how people will use a mobile app. Case in point: you may check your bank’s app once weekly, but you’re likely to open the mobile app several times per day.

In tech, there’s a credo that it’s easier to fix a car before it’s driving. Planning for mobile-specific usage based on existing analytics can allow you to better plan containerization, demand, and other factors prior to launch.

4. Track and improve

While mobile infrastructure analytics may not be as sexy as user metrics, they’re critically important to the continued success of your employee and customer experience. Looking beyond stats on adoption and page flow can enable IT to develop an end-to-end understanding of how their apps are working. IBM’s Luba Cherbakov cites the following benefits of an analytics-driven approach to mobile app infrastructure management:

  • Find opportunities for infrastructure optimization.
  • Justify and make better infrastructure investments.
  • Simplify knowledge of app performance issues.

If your back-end infrastructure isn’t up to the job, your customers may opt to delete your apps and never re-download them, without even giving their features or benefits a chance. A positive mobile customer experience requires stability and consistency in content delivery. Without the behind-the-scenes effort of IT pros, the digitization of customer relationships is impossible.

3 ways small and midsize companies can win at big data

Big data is just getting started, and small companies are taking note of how major organizations are using big data and analytics to inform business decisions, up the user experience, and barrel toward disruption.

But what if you’re not working for an enterprise? Does successful analytics require a chief data science officer and a massive budget? Turns out, analytics is more affordable and accessible than ever.

Mikkel Krenchel and Christian Madsbjerg, writing in Wired, say that even small companies share in the pursuit of understanding—since knowing your customer is essential to businesses of all sizes. High-profile project failures at large firms and successes at tiny startups are evidence that funding isn’t everything.

Common data project failures

In Analytics magazine, Haluk Demirkan and Bulent Dal outlined a number of common failure factors behind analytics projects gone awry, none of which have anything to do with the size of a company’s annual revenue. Common mistakes teams make include:

  • Data quality and reliability issues
  • Lack of vision and strategy
  • Missing architecture
  • Choosing the wrong vendors or products

While these factors can cause failure with any IT project, what’s the secret sauce for success with analytics? Ernst & Young research points to the human factor. The top 10 percent of enterprises blur the lines between big data and business, use analytics to make decisions either all or most of the time, and report a significant shift in their company’s ability to meet competitive challenges.

In other words, successful business intelligence isn’t about the software or even the volume of data you have on hand. It’s about your IT team’s ability to shift your culture to a focus on big data.

Transforming big data into big intelligence

The right approach to analytics can look dramatically different between small and midsize organizations. What’s right for an e-commerce company might be completely wrong for an app brand. However, there are a few common success factors, regardless of your mission or goals.

1. Understand thick data

Thick data is the ticket to true intelligence. In the Wall Street Journal, Madsbjerg and Mikkel Rasmussen write that an over-reliance on numbers, graphs, and factoids can miss the point.

Thick data is a reference to the “emotional, even visceral context in which people encounter [your] product or service.” This means looking beyond analytical concepts like probability and affinity to understand human factors like motivation and emotion. Companies that can crack the code on their customers’ “why” have the potential for leading customer service and a winning strategy.

2. Explore BDaaS

Too little infrastructure? No problem. Forbes’ Howard Baldwin highlights the rise in big data as a service (BDaaS) as a real solution for companies with too few assets, too little budget, or not enough infrastructure to otherwise hack it. “This can range from the supply of data, to the supply of analytical tools with which to interrogate the data (often through a web dashboard or control panel), to carrying out the actual analysis and providing reports,” he writes.

Not only can organizations purchase on-demand, monthly subscriptions to sophisticated tools, they can also tap into vast supplies of open-source intelligence (OSINT) or consumer data from vendors. In 2016, there’s simply no need for multiple data centres to do analytics.

3. Get innovative about talent

Is your small or midsize business struggling to source sophisticated analytical talent? It’s a challenge common even among enterprises, caused most often by a skill-set gap.

Carvana—a digital used-car marketplace—turned to Kaggle, a company that gamified data analysis, to source solutions for their business. By crowdsourcing a car-selection methodology and gaining better insights into their target markets, Carvana achieved meaningful results and lowered risk. Organizations of any size that lack the internal resources to custom-engineer solutions could benefit from a similarly bold approach.

Success at analytics doesn’t require expensive software investments, an on-staff team with brains more suitable for NASA, or any other factors that can differentiate huge organizations from their smaller counterparts. Putting data science to work requires a shift in mind-set—and a little ingenuity in sourcing technology, talent, and assets.