How can companies move fast, securely? In this episode, Yadin sits down with Laura Bell Main, CEO of SafeStack and co-author of the book Security for Everyone, to discuss securing organizations of all sizes, navigating security conversations when selling to large enterprises, and some common missteps with security and AI. Laura dives into practical tips for both growing a company and prioritizing security.
---------
“We need to remember that security is the oldest problem there is, so it isn't something you grow into. It isn't something that suddenly appears one Tuesday morning and you're like, oh, right, security matters now.”
“This is how your company is seen in the world. It's a bit crass to say security can help you sell, but it really kind of can.”
“We're never going to win. We just have to give up on that entirely. There is no 100 percent secure, there's no done. That's okay, that's cool.”
---------
Timestamps:
(01:16) How smaller organizations can become secure
(05:21) Security can help you sell
(06:05) Communicating security measures when pitching
(08:54) Security and AI tools
(11:01) Putting data in LLMs
(17:24) AI explainability requirements in the EU
(26:37) Understanding ROI in cybersecurity
(32:40) The role of dopamine in cybersecurity
---------
CIO Exchange on X
Yadin Porter de León on X
OWASP Top 10 for Large Language Model Applications
[Subscribe to the Podcast]
On Apple Podcasts
For more podcasts, video and in-depth research go to https://www.vmware.com/cio
0:00:00.0 Laura Bell Main: It's a weird thing being a security person who specializes in fast moving companies because you're constantly at war with yourself. Half of you is super conservative and anxious about everything in the world, and half of you is like, "Yeah, let's go build stuff." And those two things very rarely meet nicely.
0:00:18.8 Yadin Porter De León: Welcome to the CIO Exchange podcast, where we talk about what's working, what's not, and what's next. I'm Yadin Porter de León. How can companies move fast and securely regardless of their size, resources, and pace of growth? In this episode, I sit down with Laura Bell Main, CEO of SafeStack and co-author of the book Security for Everyone, to discuss securing organizations of all sizes, navigating security conversations when selling to large enterprises, and some common missteps with security and AI. Laura dives into practical tips for both growing a company and prioritizing security. So, Laura, all organizations need to be secure regardless of their size, although some of the smaller or newer organizations may struggle to make themselves secure because they lack resources, bandwidth, or expertise, or maybe their vision doesn't align with the most secure approach 'cause they're trying to move quickly.
0:01:12.0 Yadin Porter De León: You and Erica Anderson co-wrote an online book called Security for Everyone, which dives into how smaller organizations can make themselves more secure. So for those small organizations that are trying to move fast and break things, or maybe trying not to break things quite so much, how should they prioritize their approach so they can keep moving quickly with security in mind, so that they're not just moving fast and breaking things and leaking data and crashing the internet?
0:01:42.7 Laura Bell Main: So much in that question, but let's unpack it a little bit. Now, I'm also going to caveat a little bit, because there are going to be folks who are listening who, on paper or by the size of their brand, have the perception of being a huge organization. Like, you see that brand and you go, "Oh yeah, that's a huge company." And actually, outside of our teeny tiny nursery companies who are just starting on their wild adventure, we also have some companies who aren't huge, even though we feel they might be. So if you are listening and you're not at an early stage but some of these apply, don't worry, we don't judge. So...
0:02:16.0 Yadin Porter De León: Yeah. So no judgment on the show. So special.
0:02:17.0 Laura Bell Main: No, no judgment. None at all. So let's start at the beginning. So first things first, we need to remember that security is the oldest problem there is. So it isn't something you grow into, it isn't something that suddenly appears one Tuesday morning and you're like, "Oh, right, security matters now."
0:02:35.3 Yadin Porter De León: Yeah, like starting, because you're in New Zealand, so starting in the ancient times, you had to keep the wolves out of the gates so the sheep would be secure. So it's a really old, it's an old problem.
0:02:46.4 Laura Bell Main: Well, so sadly the sheep thing, we now have more cows than sheep and it just doesn't make as many good quips in a podcast.
0:02:52.5 Yadin Porter De León: Oh, I thought there were more sheep than people.
0:02:55.0 Laura Bell Main: No, more cows than that now.
0:02:56.0 Yadin Porter De León: Oh. Oh, wow. Things have changed.
0:02:57.9 Laura Bell Main: Yeah, I know. Dark times, dark times. But if we look at security, even in the New Zealand context, bizarrely, every early people had something of value, right? We have a method of cooking called hangi, which is cooking using very hot stones. It creates a steam oven in the ground. It's beautiful food if you ever get the chance to do it. Now, having the right stones that are made of the right material actually changes your cooking process and how effective you are at feeding your entire family. Now, for as long as there have been people, there have been people who've wanted something they don't have, especially if it makes their life better. Now, bringing this back from cooking stones: in modern times it could be something your organization has. Now, one of our biggest mistakes in young companies is we're like, "Ah, nobody knows we exist yet, so there's nothing of value here."
0:03:47.3 Laura Bell Main: Oh, but there is. So I'll share a personal story, because why not? When my little company, which is 10 years old now, first started, I was doing that thing where you do a conference talk and you forget to write the tool that you're going to talk about until like two months before. So I was doing a lot of coding late at night, and I accidentally made a repo public that had an AWS key in it. Now, nobody in the whole world knew my company existed. It was a tiny company, two customers. But overnight, a little robot run by some enterprising criminals spotted my AWS key, it spun up some resources, and by the time I woke up the next morning, I had spent $4,000 mining Bitcoin in São Paulo.
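A minimal sketch of the kind of guardrail that catches this before it ships, assuming it runs as a pre-commit hook or CI step. The regex patterns and the file walk are illustrative only; a dedicated secret scanner covers far more cases.

```python
# Minimal sketch: scan source files for strings that look like AWS keys before they
# leave your machine. Illustrative only -- a real setup would use a dedicated secret
# scanner; the patterns below cover only the obvious cases.
import pathlib
import re
import sys

SUSPECT_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"aws_secret_access_key\s*=\s*\S+", re.I),  # credentials-file style entry
]

def scan(root: str = ".") -> list[str]:
    """Return 'path:line' hits for anything that looks like a leaked key."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.is_dir() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                hits.append(f"{path}:{lineno}")
    return hits

if __name__ == "__main__":
    findings = scan()
    for hit in findings:
        print(f"possible secret: {hit}")
    sys.exit(1 if findings else 0)  # non-zero exit blocks the commit or the CI job
```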
0:04:29.7 Yadin Porter De León: Oh, wow. And you didn't intend to do that, did you, Laura?
0:04:32.4 Laura Bell Main: No.
0:04:33.2 Yadin Porter De León: You didn't get any Bitcoin on that deal, did you?
0:04:35.5 Laura Bell Main: I wasn't scheduled to do that till the Friday. So it was affecting my plans quite spectacularly. So even when you are a little company, it may not be the things that you think have value that somebody attacks you for, our attackers are motivation driven. So young companies are great targets because you have all of this fancy cloud infrastructure that perhaps you haven't quite got the team yet to monitor all of it, you haven't set that part up. You've got data that you are collecting and probably at quite a fast speed because you're trying to go fast and trying to solve a lot of problems. And so all of these things can have value. And so if we wait till too late, actually, we're going to have some big problems early on that A, we don't spot for a very, very long time and B, can have a huge impact.
0:05:18.3 Laura Bell Main: The second side of this is how your company is seen in the world. Now, it's a bit crass to say security can help you sell, but it really can. If you are a tiny little five-person company, for most tiny five-person companies, the dream is to sell to the giants of the world. So there are going to be listeners on this podcast who are the giants that you want to be selling to. Now, it's very difficult to do that if it's absolute chaos in what you're doing. So adding some security helps you communicate with these larger organizations during that process and say, "Hey, the risk we pose you is minimal, and therefore you can take the chance and work with an earlier stage organization."
0:06:01.8 Yadin Porter De León: And I want to underscore that too. What's the best way for them to communicate that? Is that with regulatory frameworks? I mean, there are requirements, like if you open a financial organization, you have to have certain types of regulatory things in place. Or are there other really great ways to say, "Look, we're secure," that aren't just a PowerPoint presentation?
0:06:17.0 Laura Bell Main: Oh yeah, absolutely. PowerPoint is not going to fly, team, I'm going to say it now. Remember, the love language of the security space is the spreadsheet, and that's how we communicate our feelings. Now, there are a number of ways to go about it. You could start with just a checklist, though. Every single time you try and sell to a customer, they're going to send you a checklist, and it is going to be the mother of all spreadsheets. Sometimes it's a spreadsheet dressed up as a glossy tool that they've paid money for, but it's still a spreadsheet underneath. It's going to ask you a whole bunch of questions. Now, a lot of these questionnaires are actually open source. GitLab, for example, publicly share the questionnaire that they send out to people, and so does Google. Now, that can be a really good primer, so you can grab those in advance and start writing what your answers would be. Now, further down the line, you might choose to go after a recognized framework. Often people start with SOC 2 and then move on to ISO 27001, because it's the most globally recognized. Those are really just formalizations of the same kind of questions you're going to see in those questionnaires at the start.
0:07:19.4 Yadin Porter De León: This is a third party validation of the work that you did to make sure that you actually did what you said you did.
0:07:23.8 Laura Bell Main: Absolutely. And in that case, you get to make friends with an auditor, which is always fun.
0:07:28.5 Yadin Porter De León: Those always are. And auditors and spreadsheets, they make really good Zoom calls.
0:07:31.9 Laura Bell Main: Oh, I know, right? This is the best podcast ever. Like, next up we're going to talk tax. No, there are a lot of platforms you can get to help you with the admin of all of this. But what I would just caution folks about is that in the early stage of your company, we're all fundamentally lazy, and we have to be, because we've got a lot to get done. Remember, the tool isn't going to do the job for you. The tool is just there to help you organize your thoughts and communicate in a consistent way. So you've still got to have some time around the edges to do the work needed. They can just help you focus it down a bit.
0:08:05.1 Yadin Porter De León: And so AI is going to solve this problem for us, right? So we can just send a bot and, the security bot and it goes and does all the security stuff and comes back and says, "Your environment is secure." And you say, "Great." Then you go into the large enterprise meeting with, "The bot says it's good."
0:08:20.0 Laura Bell Main: Uh-huh. Absolutely.
0:08:20.0 Yadin Porter De León: I couldn't waste any time before I inserted that in there, because I know artificial intelligence is becoming one of the big hot button topics, especially with regard to security and data security, and how secure all the different things I'm doing now are. And since you're talking about some of the younger companies, or even teams within larger companies, that are trying to move quickly and do some of the things that you're talking about, everyone is being pressed to make sure that artificial intelligence or GenAI or LLMs are infused in what they're doing too. So let's touch on that real quick, since we are talking about that trajectory and speed of organization. What are some of the common mistakes, or maybe even just misinterpretations of the current state of the technology, that these leaders are making? And what are some of those leadership decisions that may get them into trouble if they're going in the wrong direction?
0:09:04.4 Laura Bell Main: All right, listen carefully folks, I'm going to give you a whole bunch of things to process here. You're going to want to come back to them.
0:09:10.1 Yadin Porter De León: I've got my spreadsheet ready?
0:09:11.5 Laura Bell Main: Yeah. Fabulous. Good. Go get the red pen. We want the important red pen today.
0:09:15.5 Yadin Porter De León: I got it.
0:09:16.3 Laura Bell Main: Right. So let's start at the top. There is some good free open source guidance that you can go and find. So OWASP, which is a big organization, the Open Web Application Security Project, poorly named but really, really important, has produced the OWASP Top 10 for LLMs. Now, this is the top 10 security problems that happen with LLMs. So if you are in that space, it is not just written for nerds, I promise you; there is a high level version with each of the vulnerabilities, and it has been written by some really super smart people from all over the world who are talking about the challenges. So even after I've finished speaking today, you should go and look at that document, and we can share a link, and make sure that you're at least familiar with some of the language that's coming through here. Now, when you are approaching this for yourself, you've got a few choices when you're rolling out an AI-based product.
0:10:05.5 Laura Bell Main: Either you're building the model yourself and you're hosting it internally, or you are plugging into a bigger public facing one that somebody else has built. So think of this as like your OpenAI kind of thing, where you pay for API access and you use somebody else's model. Now, each of those has different challenges. If you're using somebody else's model, remember it's somebody else's model, which gives you a few challenges you need to think about. What happens if the model goes away, or if they change their billing to be so expensive that it will bankrupt your company? You need to have a long-term plan for either replacing it or understanding the financial impact of using it. Now, you'll notice I'm a security nerd leading with finance, because, controversial opinion, in this market the biggest thing that's going to kill your company is finance before it's security. So let's get the money side straight.
0:10:58.4 Yadin Porter De León: I like that.
0:11:00.9 Laura Bell Main: Next up we have what you do with your data. So if you're putting it into somebody else's model, that model is hungry for data. Every time it learns something, it's looking at the data you give it and it's evolving its own model of the world, which is great, but that means that your data is forever in that pot. So whatever you are feeding into it, you need to be really mindful that it is going to come out in some statistical form somewhere else. So don't show it any customer data, don't show it anything that's sensitive or IP, don't show it anything you are likely to want to trademark, copyright, or patent later. These are things that you should not be putting in public models.
0:11:37.3 Yadin Porter De León: 'cause it is also really, really hard to get that out.
0:11:40.2 Laura Bell Main: Oh, there's no getting it out.
0:11:41.8 Yadin Porter De León: And you can't delete it.
0:11:43.1 Laura Bell Main: No.
0:11:44.1 Yadin Porter De León: So you'll run into things like, "Okay, GDPR, I have a bunch of personally identifiable information for a citizen of the EU, and how do I get it out?" Well, you don't.
0:11:52.1 Laura Bell Main: You don't. The bigger question is, why was it there to begin with? So you need to assume that that is an untrusted space. So it's a really useful tool, they're great for productivity, there's lots of wonderful things you can do there, but you don't want to be putting anything there that is sensitive, commercially valuable, or that you would need to remove later because you're not going to have that kind of control, especially, if it's somebody else's model.
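A minimal sketch of that "don't send it in the first place" advice, assuming a handful of illustrative regex patterns; this is not a substitute for proper data loss prevention tooling or policy, just the shape of the idea.

```python
# Minimal sketch: strip the most obvious sensitive values from a prompt before it
# ever reaches a third-party model. Illustrative only; real deployments pair this
# with policy, review, and proper DLP tooling rather than a handful of regexes.
import re

REDACTIONS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone":   re.compile(r"\+?\d[\d ()-]{7,}\d"),
    "api_key": re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key IDs
}

def scrub(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this ticket from jane@example.com, callback +64 21 555 0199."
    print(scrub(raw))  # only the scrubbed version should go to the external model
```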
0:12:17.4 Yadin Porter De León: Yeah. And on that point, I'm really interested in how some of that stuff does get in there, because there potentially isn't the guidance within an organization for people to understand, even at the engineering level, how they need to treat different types of data when they're coding against these large language models. And maybe you can speak to it from a leadership perspective: how do guidelines and responsibility frameworks really play into this? Because if you want to build that trust and security, you have to build that understanding; it always comes back to that, because you do owe it to people to make sure they understand how to do things correctly. What are you seeing in the market right now, and how are people finding success by creating those guidelines and making sure that people aren't doing the wrong thing?
0:13:04.7 Laura Bell Main: It's really challenging. So if you are one of those organizations and you're listening and then you're like, "This is hard." Yes, it absolutely is. It's not just you. I don't think we're doing this very well, but I don't think we were doing this very well before we had LLMs. I think we've just exacerbated a problem we already had. And I want to just take it back a level and explain what that problem is. We are used to historically our systems being these big closed boxes, and I'm talking about 20, 25 years ago where all of the control was at the border.
0:13:33.4 Yadin Porter De León: The good old days.
0:13:34.4 Laura Bell Main: The good old days, exactly. So we had this big border, we had our web application firewalls, we had our network level firewalls, and basically everything inside the box was nice and soft and protected, and everything on the outside was the wild west and we protected against it. But what we've done over the last 20 years of software is we've decomposed our systems. We have distributed them into smaller components, into multiple technologies, multiple languages and frameworks, hosted on different platforms. Now, one of the things that causes the most vulnerabilities in our software is the fact that if we're inside a network, we have an implicit level of trust in many systems. Oh, you've come from an internal system, therefore I can trust you. But internal system doesn't mean what it used to. It hasn't come from the person sitting next to you at a desk; it could have come from a completely different organization. It could come from a different technology entirely.
0:14:29.4 Yadin Porter De León: Yeah. What's federated now is so much broader...
0:14:33.0 Laura Bell Main: So much.
0:14:33.2 Yadin Porter De León: With the cloud and the systems and API and all the different things. Yeah, internal doesn't, it doesn't mean what it used to be back when you just plugged into the mainframe.
0:14:40.8 Laura Bell Main: Exactly. And if you've wrapped an AI engine in, and it looks just like one of your other microservices or components inside your software architecture, then it can be very easy to treat all of those endpoints the same, because you don't need to know how the sausage is made underneath that front-end. It's just, "I interact with that service and it does my job for me." So we've got to stop trusting things just because we built them ourselves. Now, marketing has horribly co-opted the phrase zero trust into this massive monster of a hype cycle. But...
0:15:13.4 Yadin Porter De León: I love that. I think you can apply it to the marketing material itself. So I have zero trust of the Zero Trust marketing itself.
0:15:19.6 Laura Bell Main: Exactly. Yeah, we've come full circle now. So if you take it back to the core essence of what Zero Trust was about, it was saying that anything you receive, or any contact you have with any other system, whether you built it or not, you just have to assume could go wrong, and you still have to verify, you still have to authenticate and authorize, and you have to do that validation piece. We don't uniformly do that in our systems, and as a result, we end up with these pockets of implicit trust where there shouldn't be any. Now, if you've wrapped a big third party LLM in an API endpoint and there are people in your team who aren't aware that that data is actually going well outside your organization, well outside your control, it's really difficult to make those decisions on the fly. So we have to adopt this approach of everything being untrusted, and that's okay, it's not adversarial, it's about just keeping us safe as we go. And by doing that, we're going to get a little bit better in this LLM space and whatever comes next.
0:16:17.3 Yadin Porter De León: No, I think that's a good framework.
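A minimal sketch of that core Zero Trust idea in code, assuming a shared signing secret distributed out of band; the helper names and the header convention mentioned in the comments are illustrative, not any particular framework's API.

```python
# Minimal sketch of "don't trust a call just because it's internal": every request,
# even from a sibling microservice, must carry a verifiable signature. The shared
# secret and naming here are illustrative, not a specific framework's convention.
import hashlib
import hmac

SHARED_SECRET = b"distribute-out-of-band; never hard-code a real secret like this"

def sign(body: bytes) -> str:
    """What the calling service attaches, e.g. in an X-Internal-Signature header."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    """What the receiving service checks before doing any work at all."""
    expected = sign(body)
    return hmac.compare_digest(expected, signature)  # constant-time comparison

if __name__ == "__main__":
    payload = b'{"action": "export_customer_data"}'
    good = sign(payload)
    print(verify_request(payload, good))      # True: proceed (authorization checks still apply)
    print(verify_request(payload, "forged"))  # False: reject, even though the call is "internal"
```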
0:16:19.4 Laura Bell Main: Yeah. The other side of this is there's some interesting law coming through in Europe and I think we should all be watching this space very closely, and that's the explainability requirements for AI. So the thing that we forget with LLMs particularly, is that they are really bad at saying, "I don't know."
0:16:37.2 Yadin Porter De León: Oh, they are. Mark Anderson describes it as a puppy. It just wants to make you happy, and it'll do whatever you tell it to do. "You want an answer? I'll go find an answer." Yeah. Tell it, "Tell me a story about how Taylor Swift made handmade turtle bowls in South America," and it'll come back with a story. Yeah, this is the story, and here are the dates, and here's when she did it.
0:16:53.2 Laura Bell Main: Exactly. Now, that's great, by the way, for parents out there, I use this with my children who are at two weird ages. So we've got Harry Potter and Paw Patrol in the same thing with Pokemon somewhere in the middle. And so you can create ultimate fan fiction crossover.
0:17:07.7 Yadin Porter De León: Ah, fabulous.
0:17:08.5 Laura Bell Main: Parenting hacks.
0:17:10.0 Yadin Porter De León: Is there implicit trust within that network of Pokemon and Harry Potter?
0:17:14.3 Laura Bell Main: You never trust Pokemon, never ever trust Pokemon. They're too smiley, they're too happy.
0:17:18.6 Yadin Porter De León: They are, they are.
0:17:20.1 Laura Bell Main: So, moving on to the explainability piece in the EU. Because we need to assume that an AI engine is never going to say, "I don't know," or "I'm not going to answer that question," unless it's been explicitly told to trigger on certain keywords, there's a danger in it. In the EU, they're aware that some of these systems have already, historically, been used in things like visa decisions or deciding when to pursue somebody for a crime. Now, in those cases, it's incredibly important that if a decision was made that impacts somebody's health and safety, like their physical freedom in the world or their ability to breathe, then we want to be able to say, "Okay, this decision was made because of X, Y, and Z," and trace through that decision. So there is law passing through that is going to require software makers to provide an explanation when an AI decision has been made, such that later it can be audited and reviewed. So I think that's an interesting direction this is going to go, and I think it's really important that we have that level of transparency, because there is nothing worse than trusting a system that, like you said, is that little eager-to-please puppy. Actually, it's really healthy for us for some things to say, "No, I can't do that," or, "No, I don't know." For us to have built systems that can't do that for us is a bit of a flaw in the process at the moment.
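A minimal sketch of the audit-trail idea behind that kind of requirement, assuming a hypothetical decision record appended to durable storage; the fields and the JSON-lines file are illustrative, not what any regulation actually prescribes.

```python
# Minimal sketch: record enough context about each AI-assisted decision that a human
# can later trace why it was made. The fields and the JSON-lines log are illustrative;
# they are not a statement of what any specific regulation requires.
import datetime
import json

AUDIT_LOG = "ai_decisions.jsonl"

def record_decision(model_id: str, inputs: dict, output: str, reviewer: str | None = None) -> dict:
    """Append an auditable record of one model-assisted decision."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,        # which model/version produced the output
        "inputs": inputs,            # what it was shown (already scrubbed of anything sensitive)
        "output": output,            # what it said
        "human_reviewer": reviewer,  # who signed off, if anyone did
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    record_decision(
        model_id="example-model-2024-01",
        inputs={"application_id": "A-1234", "summary": "loan application, redacted"},
        output="flag for manual review",
        reviewer="analyst@example.com",
    )
```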
0:18:41.8 Yadin Porter De León: It is a little bit. So, you talked about systems and explainability with LLMs and trust within your network. I wanted to flip over to the other side, 'cause you touched on it a little bit, and that is the data. When you're talking about security and data, should leaders be thinking, "Look, do I need security policy portability?" I think that's a larger theme: if data goes from here to there, does the security around that data travel with it, no matter where it's going? Does it retain that context, because context, of course, is extremely important to security, throughout the whole journey? Especially when you're talking about LLMs and other outside systems, APIs, cloud pieces, how important should that data portability for those security contexts be when leaders are looking at security?
0:19:38.1 Laura Bell Main: I think it's challenging. There's a purist in me that says, from a "yay, everything is going to be awesome" utopian dream of security, it will be wonderful.
0:19:47.7 Yadin Porter De León: We'll lock it down and everything will be secure.
0:19:49.9 Laura Bell Main: Or rather, I'm going to communicate very authentically: here's what I need you to do with my data, and I'm going to give it to you, and you're going to do everything I've asked you to do and more, and you're going to be really transparent while you're doing it. But the reality is that we have no frameworks globally at the moment that require us to do that. And we have conflicting incentives, if you will, between what security is trying to achieve and what the businesses are trying to achieve. So in security, we're trying to reduce risk, minimize harm, and in our business, we're trying to grow, we're trying to innovate, we're trying to improve whatever your company is measured on. If it's a young company, that's probably going to be your revenue or number of customers. In a large one, it's going to be market share or share price, all those kinds of things. And sometimes those work really well together. So by keeping risk low, I can grow more. But, actually, it's not always the case. In fact, we've seen cases where there have been breaches where the share price has gone up after the breach, because the marketing teams are so good at what they do that they've used it as a wonderful opportunity to capitalize on the press that they've just got.
0:20:55.7 Yadin Porter De León: Yeah, look how great we handled this breach.
0:20:57.9 Laura Bell Main: Yeah, exactly. Which is a very strange space to be in. We have to keep pushing for our needs and our requirements to be met. But I don't think we're going to see it until the global legislation catches up and is actually universally applied. So for example, I'm conducting a big survey at the moment of companies all around the world, big and small, and we're collecting just seven pieces of data: what industry they're in, the size of the organization, the number of software developers, the number of people in security, the number of people in application security specifically. And we've been trying to cross-reference that data spread with the breaches that we're seeing, so what we've got going on at different-sized organizations, different industries. But you can't do that, because our breach laws, the ones that say there's actually a consequence if a bad thing happened, only apply to certain sizes of organization, primarily big ones, primarily publicly traded ones. And so there is no data for us to have a true picture of the impact of security.
0:22:00.6 Yadin Porter De León: It's kinda surprising. It seems like it should be, even if it's anonymized, it seems like that should be collected at some level. Why doesn't that exist?
0:22:08.7 Laura Bell Main: Because our organizations have no obligation to tell anyone something bad happened.
0:22:13.9 Yadin Porter De León: Yeah. Why would they want you to know? Which makes perfect sense.
0:22:17.4 Laura Bell Main: Yeah. It's a weird thing being a security person who specializes in fast-moving companies, because you're constantly at war with yourself. Half of you is super conservative and anxious about everything in the world, and half of you is like, "Yeah, let's go build stuff."
0:22:32.5 Yadin Porter De León: Yeah.
0:22:33.4 Laura Bell Main: And those two things very rarely meet nicely.
0:22:36.4 Yadin Porter De León: Yeah. But I think those two sides, I think, are really what's needed. And that's the dance that a lot of organizations are having that are either small organizations or teams within larger organizations. They're trying to say, "How do I do that dance?" We want to go build things, but at the same time, we want to move securely, we want to make sure that we're playing by the rules and how do we ride that line? And we're constantly listening to things, like you, Laura, and the way your perspective is so that as a leader I can start to help my teams walk that very thin line and do great things and innovate and create beautiful stuff. But at the same time, not have the downside risk that keeps us up at night.
0:23:11.8 Laura Bell Main: Yeah. I think there's a really important bit I'd love to dig into if we could, and it's about uncertainty. And it might be something that folks can have a think about afterwards. So when you're going fast, whether you're an internal team or a smaller organization, you are actually really enjoying the uncertainty. It's chaos, and beautiful chaos, and wonderful things are happening. And so we talk a lot about the uncertainty of early stage and how that's a superpower, and using your data and all those kinds of things, experimentation. Now, security is entirely about uncertainty. If you were to build an application, let's say, fictitiously, a doggy dating app, where you can find the perfect friend for your little eager puppy.
0:23:51.9 Yadin Porter De León: This is great.
0:23:52.7 Laura Bell Main: On a certainty level, as a business model, we can go, "Cool, I want to charge subscriptions and I want to get a share of puppies in places with expensive puppy communities." And you can make some choices in a very uncertain space. When you get to the security of that system, however, the wheels completely fall off because this isn't a case of, "Hey, I'm going to hack your system and steal money." I'd love as a thought exercise to leave it with your audience of, who might want to attack a puppy dating app and why would they want to do it? And there's at least 100 answers, so be creative. But what you start to realize is that as engineers, as early stage people, what we want is to as quickly as possible find a path from A to B. So to reduce uncertainty down to a line and then tread that line while monitoring very quickly and iterating as we go. In security, we can't do that. In security, we have many different ways an application can be harmed or misused or used in unexpected ways.
0:24:55.5 Laura Bell Main: And we can't just say, "Oh, well, I'm just going to ignore and park all of those and just focus on one." We have to juggle the uncertainty of not knowing which one is going to happen to us, and we have to plan routes that help us control all of those: prevent, so stop bad things happening; detect, so spot it happening; or respond to it. And those clash horribly. That desire to turn uncertainty into certainty, combined with the breadth and creative side of security, really hurts our engineers. Our engineers find this very, very difficult. It's just not their natural space. So if you're going to try and embrace security, one of the most important things you can do in your team is not buying tools, it's not adopting the right framework. The fundamental thing underneath all of it is to make them okay with not knowing, with uncertainty.
0:25:44.3 Yadin Porter De León: I love that framework, making them okay with uncertainty, because there are certain people who are like the whole Who Moved My Cheese framework: people don't like change. And that's in the same bucket as uncertainty, which is, things might change, things aren't changing now, but they might change. So that uncertainty of not knowing whether something is going to change is within that same framework, and getting people to be okay with uncertainty. But then having, like you said, an approach, a methodology to be able to engage with that uncertainty and make decisions, like the business decisions, or, like you said, what tooling, what approaches, what framework we're going to use. I love that approach. And I want to use that as a springboard to go into a last segment, which I call "take it to the board," which is: okay, you've got a board of directors, at a small company or a big company, and you need to be able to say, to your point earlier, what's the finance piece of this? All this stuff costs money.
0:26:35.4 Yadin Porter De León: And everyone listening to this knows that all this stuff is not cheap, it's expensive. And for every incremental dollar, you want to understand the efficacy of spending that incremental dollar. And so, in a recent article you wrote for Forbes, you talk about how tech leaders will very often get questions about whether investments in cybersecurity are having an impact, and that ROI is really important. So this is the CEO, this is the CFO, and sometimes the CIO reports to the CFO. And in smaller organizations, there's just so much transparency about how much money you're spending and what you're spending it on. And it's very, very precious, especially if you've got a short runway or a large overhead, and you're like, "Okay, we want to do this, but we want it to be secure." So how do you articulate in that conversation, let's say to the board or the investors, what the return on investment is for those incremental dollars spent on security? And how do you convince them that you're spending it the right way?
0:27:24.8 Laura Bell Main: Yeah, such a great question. And one that every size organization struggles with. I'm going to have some slightly different opinions to more traditional views. So...
0:27:33.8 Yadin Porter De León: That's why we're talking, Laura.
0:27:34.9 Laura Bell Main: Sorry, no, sorry. Okay. So I'm going to start with my hard line in the sand here, and that's that security tools are not magic boxes. They are not solving a problem that cannot be solved by a clever engineer; they are efficiency tools. So I treat them the same way that I would judge a productivity tool, not as a technical solution to a problem that only sacred mystics can solve. Now, that changes the framing of the conversation. So it's not about the fact that the only way to do this is by investing in X, Y, or Z, but saying: if we don't do this, and we accept that we still need to resolve this risk or reduce it, then what's it going to cost us in terms of people and time to do this by hand? And what is the degree of uncertainty and failure that might happen if our people are busy or distracted or have too much on, which all of us have at the moment?
0:28:28.1 Yadin Porter De León: Yes.
0:28:28.2 Laura Bell Main: I don't think there's a single one of us who has come to the end of this year and gone, "Hey, yeah, I had a chill year. Let's just do a bit more next year."
0:28:35.6 Yadin Porter De León: Exactly. I patched every system that was on the schedule this year and it's done.
0:28:40.3 Laura Bell Main: Said no one ever. Said nobody ever. So we reframe it. Firstly, it's not a magic box. There are always multiple tools that can do the same job. What you're trying to work out is how much this is going to save you in terms of time and turn into actionable change. So you don't want a device in your system that every week just spews out a giant PDF of love with 67,000 false positives that you now need a team to look through. That's no good to anyone. You need something that is giving you just the right information at the right time, in a way that is actionable. And that actionability needs to be hooked into existing tools in your workflow. So in the engineering space where I live, that means getting it to where the people who fix software live.
0:29:24.9 Laura Bell Main: So if it's living outside of that, in a completely separate part of your organization, it's never going to work; it needs to be as close as possible to the people who can fix it. Now, when you're looking at the return on investment, we try and look at it in a different way. Historically, we like saying things like, "Well, our pen test report had 10 findings this month and it had 11 last month, so we must be doing better." Or, "We only had three incidents this month, so therefore it's good, 'cause we had five last month." That's a lagging indicator. It's a view of the past. It doesn't tell us if people are even trying to attack us, or whether the world has changed around us. They're really poor indicators, but they give us a sense of comfort. So that's why we hang on to them.
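A back-of-the-envelope sketch of that "efficiency tool, not magic box" framing, with entirely made-up placeholder numbers; the point is the comparison, not the figures.

```python
# Back-of-the-envelope sketch of "treat a security tool like a productivity tool":
# compare its cost against the engineer time it saves. Every number below is a
# made-up placeholder; plug in your own estimates.
def tool_vs_manual(
    tool_cost_per_year: float,
    hours_saved_per_month: float,
    loaded_hourly_rate: float,
) -> dict:
    manual_cost = hours_saved_per_month * 12 * loaded_hourly_rate
    return {
        "tool_cost_per_year": tool_cost_per_year,
        "manual_cost_per_year": manual_cost,
        "net_saving_per_year": manual_cost - tool_cost_per_year,
    }

if __name__ == "__main__":
    # Hypothetical: a $12k/year scanner that saves ~20 engineer-hours a month at $120/hour.
    print(tool_vs_manual(12_000, 20, 120))
    # {'tool_cost_per_year': 12000, 'manual_cost_per_year': 28800, 'net_saving_per_year': 16800}
```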
0:30:08.6 Yadin Porter De León: That false sense of comfort where there's like a warm blanket and you wrap it around yourself and it's not actually keeping you warm, but it feels nice and fuzzy.
0:30:16.0 Laura Bell Main: Absolutely. And I don't judge, it's the holidays, it's what we do. But when we're talking about security, we want to be a bit better than that because we want to measure two things. We want to measure, are we doing everything we can to identify the risks we currently face? And there's a lot of words in there to unpack, like currently face, not based on the past, but actually active things now. Everything we can with the resources we have.
0:30:40.0 Yadin Porter De León: So there's like a boundedly rational limit to that. Not everything that we can...
0:30:44.0 Laura Bell Main: Exactly.
0:30:44.6 Yadin Porter De León: But everything we can with the resources that we have.
0:30:46.1 Laura Bell Main: Yeah. And with our understanding of where our risks lie, and that's where having a threat model, having a risk assessment of the ways and the likelihood that somebody may do your organization harm, can be really helpful. And then on the other side, you want to be saying: not only are we finding these things, but we're doing something with them. So are we taking steps to prevent, detect, and respond to these issues, in a way that, if you looked at the two columns in balance, you're starting to see a trend downwards, in terms of there being fewer things on your backlog that you have yet to get to. Because what we have to accept is that we're never going to win. We just have to give up on that entirely. There is no 100% secure, there's no done. That's okay. That's cool.
0:31:28.1 Yadin Porter De León: Wait, do you mean this is a constantly evolving process that you actually have to not just set and forget and you constantly have to review your processes and policies and systems to make sure that they're evolving to meet the evolving threats? Ah, security. Security is hard, Laura. Security is really hard.
0:31:41.1 Laura Bell Main: It is. And not only that, but we also have to keep learning, because all of that automation, that cool engineering stuff we do in our products to build those new innovative systems? Well, our attackers are just software engineers with a different moral code. So they're using all the same cool technologies and building the same innovative AI-based systems we are, they just do it for crime. So, yeah, sadly we never get to rest.
0:32:04.8 Yadin Porter De León: Oh, that's too bad.
0:32:05.8 Laura Bell Main: What we can look for is seeing that the balance is in check. We are finding stuff, and that's great, and we are fixing stuff, and that's good too, and the relationship between those two things holds. Your organization can't hope for a better measure than a sign that it has an active and continual culture of identifying and addressing problems.
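A minimal sketch of that found-versus-fixed balance as a metric, assuming a simple list of finding records with opened and closed months; the record shape is made up for illustration.

```python
# Minimal sketch of the "finding vs. fixing in balance" measure: per month, compare
# how many issues were opened with how many were closed, and watch the open backlog.
# The record shape (opened/closed month strings) is made up for illustration.
from collections import Counter

findings = [
    {"id": "F-1", "opened": "2024-01", "closed": "2024-01"},
    {"id": "F-2", "opened": "2024-01", "closed": "2024-02"},
    {"id": "F-3", "opened": "2024-02", "closed": None},       # still open
    {"id": "F-4", "opened": "2024-02", "closed": "2024-02"},
]

def monthly_balance(records: list[dict]) -> dict:
    opened = Counter(r["opened"] for r in records)
    closed = Counter(r["closed"] for r in records if r["closed"])
    months = sorted(set(opened) | set(closed))
    backlog, report = 0, {}
    for month in months:
        backlog += opened[month] - closed[month]
        report[month] = {"opened": opened[month], "closed": closed[month], "open_backlog": backlog}
    return report

if __name__ == "__main__":
    for month, row in monthly_balance(findings).items():
        print(month, row)
    # 2024-01 {'opened': 2, 'closed': 1, 'open_backlog': 1}
    # 2024-02 {'opened': 2, 'closed': 2, 'open_backlog': 1}
```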
0:32:27.1 Yadin Porter De León: That's fabulous. And I know it comes down to people, ultimately. Any other final thoughts that you have, Laura? Something you want the listeners to take away. If they could take away one thing, what do you think that would be?
0:32:37.6 Laura Bell Main: Yeah, I'm going to talk a little bit about dopamine and brains.
0:32:40.6 Yadin Porter De León: Excellent. That's a good approach.
0:32:42.9 Laura Bell Main: We like cybersecurity 'cause it's sexy, and it feels like we're talking about a Hollywood movie, and somebody is going to come and hack us wearing gloves and a hoodie or whatever, and it's going to be very glamorous. But in reality, 80% of the hacks we see in systems are boring. They're taking advantage of preventable things that come back to the boring basics: password hygiene, patching systems. And your brain, everyone's brain, loves dopamine. You get flooded with dopamine when you face a novel challenge. Which means when you're faced with, "Oh, look, there's a spreadsheet of 17,000 servers that all need to be patched," your brain goes, "No, thank you. This is not going to give me tasty dopamine." So you need to find a way to trick your brain into doing the boring basics and not just chase the sweet, sweet dopamine of the Hollywood-style attacks. So if you can do one thing in the next year, it's find a way for those boring basics to become fun and achievable and, even better, automate the heck out of it. Because automation takes away the need to crave dopamine, because the robot is going to do it for you, and robots don't need dopamine.
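A minimal sketch of automating one boring basic, assuming a Debian or Ubuntu host where "apt list --upgradable" reports pending updates; swap in your platform's package manager, and treat the reporting step as a placeholder.

```python
# Minimal sketch of automating one boring basic: report hosts with pending patches
# instead of relying on someone remembering to check. Assumes a Debian/Ubuntu box
# ("apt list --upgradable"); use your platform's package manager as appropriate.
import subprocess

def pending_updates() -> list[str]:
    """Return the package lines apt reports as upgradable on this host."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True,
        text=True,
        check=False,
    )
    lines = result.stdout.splitlines()
    # The first line is apt's "Listing..." header; the rest are upgradable packages.
    return [line for line in lines[1:] if line.strip()]

if __name__ == "__main__":
    updates = pending_updates()
    if updates:
        # In a real setup this would go to chat, a ticket, or a dashboard on a schedule.
        print(f"{len(updates)} packages need patching:")
        for line in updates:
            print("  " + line)
    else:
        print("All patched. No dopamine required.")
```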
0:33:50.3 Yadin Porter De León: That's fabulous. Well, Laura, this has been a fabulous conversation. We got dopamine in there, which is always a wonderful addition to any chat. Where can people find more about you, about what you're doing out there on the internet?
0:34:02.0 Laura Bell Main: Awesome. Well, you can find me on LinkedIn. I am erratic at posting, but sometimes there's good stuff there, so feel free to send your connection requests. The other thing is, I wrote something called One Hour AppSec, which is very free. It's just a little newsletter, and every two weeks we send you through 60 minutes' worth of stuff, little videos, templates, and things for you to have a think about application security. Now, even if you're in a CIO role, don't be alarmed by it, don't be like, "This isn't for me." If you're looking to bridge the gap between what your engineers are concerned about and what your world looks like, this can be a really great way to just have a think about the types of things they might be thinking about when they're trying to protect software. And maybe that could be the start of a wonderful conversation. So if you go to www.onehourappsec.com, you can sign up. It's not a marketing trick. You can see our previous sprints. It's one hour per sprint, every two weeks, and then you can just get started on really getting a foundation in how to secure software.
0:34:55.9 Yadin Porter De León: That's fantastic. Well, Laura, this has been wonderful. Thank you so much for joining the CIO Exchange podcast.
0:35:02.7 Laura Bell Main: Thanks so much for having me. It's been fun.
0:35:06.0 Yadin Porter De León: Thank you for listening to this latest episode. Please consider subscribing to the show on Apple Podcasts, Spotify, or wherever you get your podcasts. And for more insights from technology leaders, as well as global research on key topics, visit vmware.com/cio.
[music]