You've likely heard that 90% of the world's data was created over the last two years. This phrase, often quoted, sometimes attributed, is passing through the public consciousness on its way to becoming trivia. Before it's reduced to a 'fun fact', I offer it as a cautionary tale. The author's favorite fictional detective once said, "There is nothing more deceptive than an obvious fact." I invite the reader to turn this 'obvious fact' around and project it forward into the future. If true, the world's data will grow tenfold in the next two years, and a hundredfold in the next four.

The reasons for the rise, and its continuation, should be self-evident. Data is perceived as an asset, and one that holds valuable information. Big data has led to the development of new tools and new fields of analytics (which in turn create more data), from which ever more valuable information is being gleaned. From an economic point of view, the big data market was worth $49 billion in 2019 and is expected to grow to $103 billion by 2023. Abstractly, one should expect market forces to drive the growth of any monetizable commodity.

Risks

The rise of data poses several challenges for information security; here are just a few.

Protecting the asset

The 'obvious fact' is that the asset under protection is growing. In 2012, the world's data was expected to approach 40 zettabytes by 2020, and a recent study predicts 175 zettabytes by 2025, just five years hence. More to the point, the proportion of data requiring protection is growing faster than the digital landscape itself, from less than a third in 2010 to an estimated 40% by 2020.

A number of factors contribute to the growth of raw data, from social media to digital transformation to innovation. For example, 3D radiology has increased file sizes by a factor of 20 over 2D radiology, and autonomous vehicles are expected to
generate 3TB of data per hour, or just under one GB each second. Analytics, turning raw data into valuable information, is still in its infancy. Research suggests that only a tiny fraction of data is analyzed, but the big data growth figures above suggest explosive interest.

The data under protection today is just the tip of the iceberg. In terms of raw data, security should partner with IT to understand data storage, archival, and backup strategies, with data's growth trajectory top of mind. Analytics will both increase data demand and generate even more information. Inputs are likely to include customer privacy and financial data, and results will be both sensitive and valuable. Analytical environments should be assessed and managed from the perspective of data risk.

Data in motion

Data is on the move, and that movement is expected to continue. A 2018 IDC white paper on the topic describes data location in three broad categories.

The Core: Once the exclusive province of the enterprise data center, the core is increasingly the cloud (whether public, private, or hybrid). Predictions call for more data in the public cloud than in endpoints by 2020, and for more data in the public cloud than in traditional data centers by 2021.

The Edge: Be it the branch, the retail outlet, or the geographically removed office, the edge is a location in transition. In some cases, virtualization is moving edge data back to the core. At the same time, the proliferation of embedded devices (cameras, POS terminals, payment systems, etc.) is generating more data at the edge than ever before.

The Endpoint: Again, a blurred distinction, but upwards of 150 billion connected devices are expected by 2025, most of which will be generating data, and that in real time.
The mobile device, it almost goes without saying, is the favored device for consumer generation and consumption of data (81% of Americans now own a smartphone), but this category also includes tablets, wearables, personal computers, and the internet of things: devices that may not store or process data, but certainly generate a great deal of it.

Security should emphasize (i.e., recruit, retain, and develop) application security expertise as business responds to the endpoint's significance as a business channel. Endpoint development is an area of compelling security challenges: development cycles are short and platform security controls cannot be assumed. Security should conduct aggressive risk assessments at the edge, as the evolution toward greater services and faster response drives local analysis, requiring greater computing power and increased data retention.

As for the cloud…

Third parties

The cloud, viewed simply, is just someone else's data center. Managing security in the cloud means managing risk in a third-party environment and leveraging the controls on offer. For security, the cloud is an exercise in third-party risk management, and information security will need to develop a very active third-party management skillset.

On the subject of third parties, consider too that some of an organization's service providers hold data that is, or will be, of analytical interest. Service providers will be called on to:

- increase their own level of analysis
- provide a greater level of access to the data they hold
- make that data available to their clients

Data analyzed at the third party will likely increase in value, necessitating enhanced controls. Greater access is, of course, an issue of identity and access management, while greater availability will mean increased data flows requiring a re-evaluation of connectivity controls. Security should keep watch over each of these outcomes, and over third-party data
custodians in general.

Complexity

Data science describes big data's characteristics with nouns starting with the letter V. The three most important are volume, variety, and velocity (though there are others), and collectively they describe the complexity of big data and how it differs from previous concepts of data management.

Velocity: It is difficult to prioritize the security concerns around big data's characteristics; a case could be made for each of them, but I think the first must be velocity. There is a general data processing risk should the enterprise generate data faster than it can consume it, before it is archived or lost to storage constraints (the 'analysis gap'). To information security, the risk is greater: should analysis fall behind, indicators are produced too late for preventive action, or worse, too late for timely incident response.

Variety: Data is being generated outside the traditional data center (at the edge and the endpoint) and from a new variety of sources. Unstructured data makes up 80% or more of enterprise data and is growing at a rate of 55% to 65% a year. Securing application data (as opposed to traditional transactional databases) will take on new importance. The attack surface is both growing and changing.

Volume: The security concern here is straightforward: the raw asset under protection is growing.

Suggestions & opportunities

The future's challenges, and its potential rewards, require mastering the data at our disposal. Just as business is leveraging analytics to create value, information security can, and must, do the same.
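Even simple arithmetic on the figures quoted in this article is a start. A minimal Python sketch, taking the projections cited above (90% of data created in two years, 40 ZB by 2020, 175 ZB by 2025, 3 TB per vehicle-hour) at face value:

```python
# Sanity-check the growth figures quoted in this article.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# If 90% of all data was created in the last two years, the prior
# total was the remaining 10% -- i.e., a tenfold rise every two years.
growth_factor = 1 / (1 - 0.90)
print(f"Implied two-year growth: {growth_factor:.0f}x")

# 40 ZB projected for 2020 vs. 175 ZB projected for 2025.
rate = cagr(40, 175, 5)
print(f"Implied annual growth 2020-2025: {rate:.0%}")  # 34%

# 3 TB per hour, expressed in GB per second (decimal units).
gb_per_second = 3 * 1000 / 3600
print(f"3 TB/hour = {gb_per_second:.2f} GB/s")  # 0.83
```

Even the more conservative projection compounds at roughly a third per year; storage, backup, and protection planning should assume that pace.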
Here are three ways security can change its relationship with data.

Data science

Data science brings a new generation of analytical skills and technologies to the information security toolbox. Many are designed for use with very large data sets. A few examples:

- Data mining to simplify data sets and find patterns
- Machine learning to draw new insights from (very) large data sets
- Predictive analytics to prioritize, or enrich, security controls
- A spectrum of technologies to offset the shortage of experienced, qualified resources, from user-friendly, accessible programming languages and specialty statistical code to the unrealized potential of artificial intelligence

Threat intelligence

Threat intelligence is a well-understood discipline, but it tends to depend on security control data and commercial feeds. As data grows, each enterprise creates a treasure trove of data it could, and should, analyze. Security should expand the scope of threat intelligence beyond its own controls and examine all the data at its disposal, including user behavior, network data flows, and the business applications it protects. Threats, after all, can be anywhere.

Data protection

A data protection program should be judged by its strength and simplicity, not by its size and complexity. The ideal data protection strategy would have a single set of strong default controls (authentication, encryption, etc.) applied to all data, eliminating the need for classification and labels. Data governance would require only a lifecycle policy and a declassification (release) process. Employee training, project requirements, and IT operations would all be identical, and straightforward: if the data is here, it's protected, no exceptions.

If this seems a trifle naïve, it's intended to make a point: data protection should be simple to explain, easy to implement, and as strong as you can make it.
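To make the data science and threat intelligence suggestions above concrete, here is a minimal sketch of mining the enterprise's own data for indicators. The host names, byte counts, and threshold are all hypothetical; a real program would use richer features and models (clustering, isolation forests, and the like), but the shape is the same: establish a baseline, measure deviation, alert.

```python
# Treating the enterprise's own data (hypothetical per-host daily
# outbound byte counts) as a threat intelligence source: flag hosts
# whose transfer volume deviates sharply from the baseline.

from statistics import mean, stdev

# Hypothetical daily outbound bytes per host (illustrative values only)
outbound = {
    "host-01": 1.2e9,
    "host-02": 0.9e9,
    "host-03": 1.1e9,
    "host-04": 1.0e9,
    "host-05": 9.8e9,  # an exfiltration-sized outlier
}

baseline = mean(outbound.values())
spread = stdev(outbound.values())

def anomalies(volumes, threshold=1.5):
    """Hosts more than `threshold` standard deviations above the baseline."""
    return [host for host, vol in volumes.items()
            if (vol - baseline) / spread > threshold]

print(anomalies(outbound))  # ['host-05']
```

A z-score is a crude detector, but it illustrates the point: the indicator was already sitting in data the enterprise held; the only new ingredient is analysis.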