Veronica Schmitt started to wear an implantable cardiac device when she was 19. A few years ago, although the small defibrillator appeared to be working properly, she felt sick. "I kept passing out, and I went to a hospital, and once they had to resuscitate me," she says. "That was not supposed to happen."

Her doctor pulled out the data the device was logging and said that everything was all right. She shouldn't worry; maybe it was just stress.

Schmitt, who is now in her early 30s, has always been passionate about technology, so she didn't buy into this. Instead, she turned to her other device that logged health data, her smartwatch, pulling out XML files and analyzing the data herself. She proved that the two gadgets showed contradictory information and asked her doctor to prescribe additional medical tests.

Those tests proved that she was indeed sick and that her implantable cardiac device was malfunctioning. Schmitt went into surgery to have the device extracted and replaced, which changed her life. "My hands were warm; my cheeks were red. I wasn't gray in the face anymore," she says. "If I didn't know how to look at logs and data, I'd probably be dead."

After she recovered, Schmitt became obsessed with logs. She analyzed different devices and tried to understand how she could improve log keeping. She is now the leading voice of a movement that aims to help everyone build better logs, focused not only on performance but also on security. "We don't do monitoring the way we should," she says.

Building better logs

Schmitt took inspiration from two books by Gene Kim, The Unicorn Project and The Phoenix Project. She realized that poorly designed logs are "a byproduct of how dysfunctional organizations are in terms of security, development, and operations or having silos."

Since large entities move slowly and are reluctant to change, Schmitt focuses on developers, trying to influence how they work. 
"I'm trying to speak in the language that makes developers excited, but also makes logs cool, 'cause logs suck," she says.

Most developers she spoke to admitted that they were not trained in designing logs. They simply recorded information that was relevant to them, focusing on performance. Few thought about security or logged the data that would be needed in the event of a breach.

To help them, Schmitt designed a benchmark spreadsheet inspired by the NIST security standards. Developers can take their application logs and score them to see whether they are doing a good job when it comes to integrity, performance, and security. Better logs make it easier to distinguish between critical data and noise, and developers who update theirs according to the recommendations will be better prepared when dealing with a security incident.

Schmitt also created a list of five philosophies for designing logs.

1. Logs should be simple, structured, and detailed enough

First, logs should be simple and should retain the minimum amount of data that does the job, Schmitt says. Anyone briefly looking over them should be able to understand what they contain. "The logs should not be seen as a cache of information," she wrote on her blog. They should rather be seen as "a source of information that is simplified to only contain that which is necessary."

She also calls for consistency when designing logs. Some developers, for instance, prefer to use local time when logging dates and timestamps, while others go with UTC. This can break a forensic researcher's timeline. "The larger the team, often, the more disconnected the logs," she says.

To address this issue, organizations can plan the structure and the format of their logs. They can start by asking a couple of questions: "Are these logs going to be used for enrichment purposes within a SIEM solution?" Or, "What is the purpose of the events you choose to monitor? 
Are they more related to debugging, error handling, security events, or future forensic incidents?"

Asking these kinds of questions is relevant not just to developers but also to companies that want to catch potential threats, says Nick Carstensen, product manager for security and integrations at log management company Graylog. "Our key philosophy is to know what you are trying to accomplish and ensure you are collecting the logs to meet your goal," he says.

2. Create metadata

Some of the data developers work with can be sensitive and should not be logged. "There are many things to consider, including whether you should have the information at all or perhaps simply reconsidering how you print your log statements to deal with these types of data," Schmitt wrote.

One way to streamline the process is to tag data as public or private, with specific definitions within the organization of what those words mean. "When you know a variable contains potentially sensitive user information, mark it as secret explicitly," Schmitt wrote. "Building in the controls required to identify what type of information your variables may contain gives you the power to set the rules about when they are, or can be, disclosed."

When logs are stored on a device that's outside the organization's control, they should include only public information. If they contain sensitive information, the organization might face serious consequences in the event of a breach.

3. Keep logs clean and focused

Logs are mostly analyzed when things go wrong. The rest of the time, they tend to be ignored. The volume of stored information expands, and sometimes minor design flaws propagate. Logs "grow with the application," Schmitt says. "[Y]ou will accumulate useless logs or logging debt."

When logs include too much worthless data, they don't have much value for researchers. Schmitt suggests reviewing logs as applications grow. 
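The first two philosophies above can be sketched in code. The following is a minimal, illustrative Python example, not Schmitt's benchmark or any particular library: each entry is a structured JSON line, timestamps are always recorded in UTC, and any variable explicitly tagged as secret is redacted before it reaches the log. The `Secret` wrapper and field names are assumptions for the sake of the sketch.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("app")

class Secret:
    """Illustrative marker for a value that must never be logged verbatim."""
    def __init__(self, value):
        self.value = value

def log_event(event, **fields):
    """Emit one structured, UTC-stamped JSON log line, redacting secrets."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),  # always UTC, never local time
        "event": event,
    }
    for key, value in fields.items():
        # Values tagged as Secret are replaced with a placeholder.
        entry[key] = "[REDACTED]" if isinstance(value, Secret) else value
    line = json.dumps(entry)
    log.info(line)
    return line

# Usage: the token is explicitly tagged as secret at the call site.
log_event("login_failed", user_id=42, token=Secret("eyJhbGciOi..."))
```

Tagging the value at the point where it enters the log, rather than trying to scrub log files afterward, is the kind of explicit control Schmitt describes.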
Developers who aim to produce clean code should also want to have clean logs, she says. She recommends testing logs regularly against a benchmark to prevent them from getting too bulky.

4. Prepare for being breached

Almost every application or organization will be compromised at some point, and it should log accordingly, trying to help future investigative teams analyze those incidents. Schmitt has examined many logs over the past few years and found that they often include information with little value, such as uneventful status checks or system checks, which clutter the relevant data. She tells developers to avoid logging normal behavior and instead focus on changes and exceptions. "You should be far more concerned with logging when things go wrong," she wrote.

Logs should also focus on vulnerable areas. If, for instance, an application could potentially suffer injection attacks, developers should build extra logging controls to detect those faster.

Companies, too, should think ahead and plan for the worst-case scenario. "Getting the logs off the system in real time will allow for the reconstruction of what happened in the breach and the extent of spread after an initial attack," Carstensen says. "Incident responders will start at the known data point of a breach (IP, host, file name) and then try to understand what happened prior to it."

Good logs help investigators see whether a malicious file was downloaded via the web or spread from another host on the network. Then, they can search back in time for similar issues.

In the event of an attack, the worst thing that could happen is to discover that crucial information is missing. "Not having the logs required to uncover how they did it is frustrating," said Grant Ongers, co-founder of application security consulting company Secure Delivery, who works with Schmitt. 
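Philosophy 4's advice, logging exceptions and suspicious activity rather than routine checks, and adding extra controls around vulnerable areas such as injection, might look like the following sketch. The patterns and event names here are illustrative assumptions, not a real detection library; production injection detection is far more involved.

```python
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.WARNING, format="%(message)s")
security_log = logging.getLogger("security")

# Illustrative patterns only; real injection detection is far more involved.
SUSPICIOUS = re.compile(r"('|--|;|<script|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

def check_input(field, value, source_ip):
    """Log a security event only when something unusual happens
    (suspicious input here), not on every routine request."""
    if SUSPICIOUS.search(value):
        security_log.warning(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": "possible_injection",
            "field": field,
            "source_ip": source_ip,
            # Record that it happened and where, not the raw payload,
            # which may itself be sensitive or attacker-controlled.
            "value_length": len(value),
        }))
        return False
    return True

check_input("username", "admin' OR 1=1 --", "203.0.113.7")
```

Note that uneventful requests produce no log entries at all: only the exception is recorded, which keeps the signal-to-noise ratio high for a future investigator.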
"When digital forensics are asked to look into a potential breach, if there are no logs that focus on the security events that may have occurred, then there are no answers to give the CISO," he says. "And the CISO has no answers to give to the board or the relevant data protection or regulatory authority."

According to Ongers, there's something even worse than that: "If you have no security-related logs, or the ones you have are unreliable or otherwise unusable, then even discovering that a potential breach happened is impossible," he says.

5. Store logs for secure access

Designing good logs is one thing. Storing and securely accessing them is another. While investigating breaches, Schmitt learned that people often put "an unreasonable amount of trust" in the technology they use. Her advice is to "trust no device, no system, and no method of transmission."

Often, when the device that stores logs is a user's mobile device or laptop, the organization that developed the app has little control, and it's best to play it safe. "There should never be any information in the logs that can be used to derive additional information about how the application functions, authenticates, or endpoints it communicates with," she says.

Logs should contain just enough information for the debugging process to work and should not include elements that could be considered sensitive, because they might fall into the wrong hands. "Many breaches occur because we assign a high level of trust to internal services and members of the organizations," she wrote. "Many breaches occur from within, not necessarily from outside. Logs contain valuable information that any attacker might want to have access to."

Carstensen agrees that organizations need to be wise when deciding who can access the logs and how. They should limit access to a minimal number of people and take measures to prevent log manipulation. 
Specifically, he recommends "removing the ability to delete logs unless approved by two separate people." He also points out that companies should meet all the compliance regulations that apply to them. In addition, he advocates encrypting archived logs because they might contain sensitive data.

Why we need to pay more attention to logs

There's an old principle that governs forensics, Locard's exchange principle: when criminals operate somewhere, they do two things: bring something into the crime scene and take something from it. Both should show up in logs and can be used as evidence.

This is why good logging should be part of any organization's security strategy, Schmitt says. Ongers seconds that, adding that developers are often a key part of the solution. "Security needs to be built in by design, during development," he says.

Schmitt plans to continue teaching computer experts to see logs from the incident responder's perspective, telling them to log fewer things but to make the process more efficient. "The biggest thing is just simplifying logs," she says. "It's taking these complex amounts of information and reducing them."