


Token Security Podcast | Senior Engineering Director at Zymergen on Code Reviews


About This Episode

This week Jeff Burkhart, Senior Engineering Director at Zymergen, talks code reviews, code review fatigue, and what to do when agile becomes tedious.

 

About The Hosts

Justin McCarthy

Justin McCarthy is the co-founder and CTO of StrongDM, the database authentication platform. He has spent his entire career building highly scalable software. As CTO of Rafter, he processed transactions worth over $1B collectively. He also led the engineering teams at Preact, the predictive churn analytics platform, and CafePress.

Max Saltonstall

Max Saltonstall loves to talk about security, collaboration and process improvement. He's on the Developer Advocacy team in Google Cloud, yelling at the internet full time. Since joining Google in 2011 Max has worked on video monetization products, internal change management, IT externalization and coding puzzles. He has a degree in Computer Science and Psychology from Yale.

About Token Security

At Token Security our goal is to teach the core curriculum for modern DevSecOps. Each week we will deep dive with an expert so you walk away with practical advice to apply to your team today. No fluff, no buzzwords.

Transcript:

Max Saltonstall 0:00
Hello, this is Max Saltonstall from Google Cloud and today we’re joined by Jeff Burkhart from Zymergen. Welcome, Jeff.

Jeff Burkhart 0:25
Hey, thank you.

Max Saltonstall 0:26
So you are a senior engineering director. What does that mean in English?

Jeff Burkhart 0:29
Making sure we have enough bandwidth, making sure we’re getting clear on requirements. So a lot of communication, collaboration, coordination, prioritization, balancing the strategic view with the tactical.

Max Saltonstall 0:43
Awesome and can you tell us just a little bit about what Zymergen is doing?

Jeff Burkhart 0:46
We are doing genetic research. Customers come to us with an existing microbe and they’re looking to us to optimize it. We’re basically doing what Henry Ford did to the automobile, which is having repeatable processes that we can apply at scale, applying technology across the board.

Jeff Burkhart 1:09
So robotics, image analysis, data science, machine learning, workflow automation, strain recommendation, and more.

Justin McCarthy 1:19
Cool, very cool. In some ways, it sounds like you have a problem that’s actually pretty distinct from a lot of the folks we talk to here. We’re covering code reviews today, and it sounds like there are a lot of elements of your system that could potentially go wrong.

Justin McCarthy 1:33
We typically look at things through a security lens here on the Token Security Podcast, but really, code reviews apply to correctness throughout your code base. So I’d love for you to start us off with just a little description of how code reviews are structured in your environment, and what you’re looking for in a typical code review.

Jeff Burkhart 1:49
So one, we review everything. Every check-in requires a pull request approval. It’s the tool we use for building context, knowledge transfer, supporting collaboration, and some visibility across our groups.
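
The episode doesn’t name the tooling behind this policy; a minimal sketch of "every check-in requires a pull request approval", assuming GitHub branch protection, might look like the following (owner, repo, and token handling are placeholders):

```python
# A minimal sketch of "every check-in requires a pull request approval".
# The episode doesn't name Zymergen's tooling; this assumes GitHub and
# uses its branch-protection REST API.
import os
import requests

GITHUB_API = "https://api.github.com"

def require_pr_approval(owner: str, repo: str, branch: str = "main") -> None:
    """Protect a branch so nothing merges without an approving review."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/branches/{branch}/protection"
    payload = {
        "required_status_checks": None,
        "enforce_admins": True,
        "required_pull_request_reviews": {
            "required_approving_review_count": 1,  # at least one approval
        },
        "restrictions": None,
    }
    resp = requests.put(
        url,
        json=payload,
        headers={
            "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()
```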

Justin McCarthy 2:08
The Token Security Podcast is brought to you by StrongDM. StrongDM easily integrates with your identity provider to onboard, offboard, and audit your team’s access to databases and servers.

Justin McCarthy 2:21
One consistent theme that I’ve noticed in code reviews over the years is the fundamental role of organizing the review, and organizing the code, for visibility, so that you actually have some chance of noticing a problem during a review. It’s far too easy to simply pass your eyes over the code, but if it’s not structured in a way that will elicit some notice, then you’re not going to get the same value out of it.

Justin McCarthy 2:48
So can you talk a little bit about how you optimize for visibility? And actually, since you guys are in biology, I’m going to say I imagine it as sort of dyeing a cell or inserting contrast for imaging. Can you just talk a little bit about how you increase your odds of noticing that there might be a problem as humans are looking at the code?

Jeff Burkhart 3:11
Yeah, there are a few aspects. One is the magnitude of the pull request. We have a very strong bias towards check-ins of 500 lines or less, so we’re really trying to avoid the monolithic check-in. But that said, part of what we try to account for is design reviews and code reviews as part of our sprint planning.

Jeff Burkhart 3:36
Because, you know, if there’s a significant body of work, people are going to have to have context, and they need to know that in advance so they’ve actually planned the bandwidth. One of the biggest challenges is: I’m working towards a deadline, I have this massive check-in at the end of my project that’s now time critical, and I’ve imposed this huge burden on you, and you have your own work that’s planned and you didn’t have that factored in.

Jeff Burkhart 4:03
So, you know, it’s about having enough lead time up front, so that people can build context with the design documents and factor it into their bandwidth, having that context established so that you’re not coming in flat-footed into the code review.
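
Jeff doesn’t say whether the 500-line bias is automated; a rough sketch of enforcing it as a CI step, assuming a git checkout and an origin/main base branch, might look like this:

```python
# A rough sketch of the "500 lines or less" guideline as a CI check.
# The base branch name and the hard limit are assumptions.
import subprocess
import sys

MAX_LINES = 500

def changed_lines(base: str = "origin/main") -> int:
    """Sum added + deleted lines relative to the base branch."""
    out = subprocess.check_output(
        ["git", "diff", "--numstat", f"{base}...HEAD"], text=True
    )
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t")
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_LINES:
        sys.exit(f"Pull request touches {n} lines; please split it up "
                 f"(guideline is {MAX_LINES} or less).")
    print(f"OK: {n} changed lines.")
```

Run as a CI step, this fails the build and nudges the author to split the change before reviewers ever see it.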

Max Saltonstall 4:18
Do people ever work in pairs? So there’s this assumption that you and I are developing a solution together, and at the end, you’re going to check in and I’m going to review it, or vice versa. We have that context; as we go, we build it together.

Jeff Burkhart 4:29
Yeah, we definitely try to maintain some continuity on the larger things that are strategically important. The other thing we do is we try not to be single-threaded on any projects. So typically, you know, having two people, and maybe there’s a third person outside of them that’s doing the code reviews, so that we’ve got some objectivity. I think another really important factor: I’ve been in organizations where the architects were really über individual contributors.

Jeff Burkhart 5:00
You know, they could get tremendous things done individually, but it’s a major gear shift for them to have more of a heads-up orientation, where they’re doing code reviews and understanding what the major architectural things are that are coming through, so that they have the bandwidth to catch those things.

Jeff Burkhart 5:21
And then we’re focusing. So if somebody is introducing a new caching layer, that can be a major area to focus on, for correctness and reliability and uptime and capacity planning, and all the different dimensions that are needed to make that work. So again, it’s having context in terms of the magnitude and what the impact on the system is. It’s not just looking at correctness on a line-by-line basis in the code, but understanding what the overall requirements are for this thing to fit successfully into the system.

Justin McCarthy 6:00
How do you distribute responsibility between the QA process, any automated testing that’s done, and code review? In other words, what’s something that could definitely only be caught in code review that wouldn’t be caught during, say, a product manager accepting some new product?

Jeff Burkhart 6:18
So we are at an interesting place in our evolution: the testing is all done by engineers currently. We’re writing unit tests, we’re writing integration tests, and the test cases themselves are part of the code review, part of the check-ins, and part of what’s necessary for the work to be considered complete.

Jeff Burkhart 6:42
We are getting to a hybrid model where we’re building up a dedicated test team that will focus on the automation framework and will work with the teams, so that the individual engineering teams can contribute to the framework. The challenge is that we have a very strong individual team focus, and so as a management team we have to complement that with the cross-cutting issues. Testing is definitely something that needs to be coherent across the tech stack.
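
To make "the test cases themselves are part of the check-in" concrete, here is a toy illustration; the helper and its test are hypothetical, but both would land in the same pull request:

```python
# Illustrative only: the helper and its test are hypothetical, but they
# show the practice Jeff describes; the change is incomplete without
# the test shipping alongside it in the same check-in.
def normalize_strain_id(raw: str) -> str:
    """Canonicalize a strain identifier (hypothetical example)."""
    return raw.strip().upper().replace(" ", "-")

# pytest-style unit test, shipped in the same check-in:
def test_normalize_strain_id_strips_and_uppercases():
    assert normalize_strain_id("  zm 0042 ") == "ZM-0042"
```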

Max Saltonstall 7:14
How do you decide who’s going to be doing a given code review? Are there requirements? Do I need to meet a certain bar or have a certain expertise to review a pull request you’ve submitted?

Jeff Burkhart 7:24
We do a mix. On the one hand, we just make general pull requests, and anybody in the organization can pick those up. But we also do targeted pull requests: when we need somebody with specific domain expertise, we tag them for that.

Max Saltonstall 7:42
Do the general ones get picked up in a timely fashion? I can picture a tragedy-of-the-commons situation.

Jeff Burkhart 7:48
As a management team, we’re kind of keeping that in the foreground, because obviously throughput with pull requests is critical for velocity. And I think we have a very, very strong orientation towards collaboration.

Jeff Burkhart 8:08
I mean, we’re all sitting in an open area and everybody’s accessible, and we really encourage people to get up out of their seats. I have a really strong bias towards chalk talks. Design reviews and design documents are great, there’s a need for formal meetings, there’s a need for code reviews, but it’s about really supporting a culture of collaboration, where people are having open discussions and brainstorming meetings early on. Our hope with code reviews is that by the time you get to that point, it’s really a review for correctness of the implementation, because the chalk talks have aligned everyone on requirements, the design reviews have happened, the thing has been socialized, and now you’re really down to the execution of it.

Justin McCarthy 9:00
Are your code reviews currently configured for blocking or non-blocking mode? In other words, do they gate releases, or can they in some cases be conducted after release? In my career, I’ve worked in both modes.

Jeff Burkhart 9:12
Right. So we’re very close to gating. The official policy is that the feedback from a code review are suggestions; they’re not absolute requirements. That said, we aim for consensus, and if there’s a real strong difference of opinion, we will tend to escalate that to the tech lead or the engineering managers. If we couldn’t get it resolved and worked out in time, it would block the release of the feature, but it would not block the overall release. So we have a very strong emphasis on feature flags, being able to turn off features, and that’s useful both for managing the release and for managing things once they get to production.

Jeff Burkhart 10:05
So we want to be able to back things out as soon as humanly possible and restore the service as quickly as possible. Engineers can go off and diagnose the issue and troubleshoot, but they don’t have to have the system down while they’re doing that.
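
A minimal sketch of the feature-flag pattern Jeff describes, with a hypothetical flag file and call sites; the point is that the stable path is one runtime check away, so a bad feature can be turned off without a redeploy:

```python
# A minimal sketch of runtime feature flags. The flag file location,
# flag name, and call sites are hypothetical.
import json
from pathlib import Path

FLAG_FILE = Path("/etc/myapp/feature_flags.json")  # e.g. {"new_caching_layer": true}

def is_enabled(flag: str) -> bool:
    """Re-read flags on every call so an operator can flip them live."""
    try:
        flags = json.loads(FLAG_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # fail closed: if in doubt, the feature stays off
    return bool(flags.get(flag, False))

def fetch_direct(query: str) -> str:
    return f"direct:{query}"   # stand-in for the stable code path

def fetch_with_cache(query: str) -> str:
    return f"cached:{query}"   # stand-in for the new, flagged code path

def fetch_results(query: str) -> str:
    if is_enabled("new_caching_layer"):
        return fetch_with_cache(query)  # new path, still behind the flag
    return fetch_direct(query)          # restored instantly if flag is off
```

Re-reading the file on every call trades a little I/O for the ability to flip a flag live, which matches the "restore the service first, debug later" posture Jeff describes.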

Justin McCarthy 10:23
So in any code base, there are going to be areas of the code that are more sensitive than others, and I’m wondering how your team maintains their perception of that sensitivity gradient. Since you’ve mentioned robots before: if I’m describing the motion of a robot and I don’t do the calculus right, that arm is going to crash, and I don’t want to crash the arm, because those are very expensive robots. On the other hand, if I just flub the makefile and the build fails in the CI system, well, that’s recoverable before it goes to production. So can you talk a little bit about how you treat sensitivity in different parts of the code base?

Jeff Burkhart 11:01
Yeah, so we definitely have tech leads that have ownership of specific parts of the code base. Again, I try not to be single-threaded on that, so we share the domain knowledge; ideally, there are two or more people that are qualified for the different areas. For example, we’ve got a central authentication and access control system that is mission critical for our services.

Jeff Burkhart 11:35
I have a couple of domain owners there, and that’s closely monitored. Similarly, I have people that are responsible for the data schema as we’re evolving the database and the different data stores. One example: we have roughly plus or minus six engineering teams, and they are all advancing the data schema in different ways.

Jeff Burkhart 12:12
And we have a need for basically different kinds of data access, whether it’s a graph database, whether it’s Elasticsearch, whether it’s a relational data mart or a data warehouse. So I have a hub-and-spoke model where there are a couple of people on my core infrastructure team who have the overview of all the different pieces.

Jeff Burkhart 12:47
And then I have an additional four people that are reaching out to all the engineering teams on an ongoing basis, where we’re talking about the evolution of the schema, and then we’re bringing that back in. So we’re trying to balance this model of autonomy for the individual teams, because we want people to be empowered, but we also want the evolution to go forward in a coherent way.

Justin McCarthy 13:12
I’ll offer one thing we do on our team: we have automated sensitivity guards. We write code that reads our code, and basically, depending on what aspect of the code base is touched, and in what way, it will actually trigger process steps, based on some high-level policy. So you can imagine it sort of opening a ticket that says, hey, Jeff, you have to go look at that change.

Jeff Burkhart 13:35
I can definitely imagine that going very well or very poorly, right?

Justin McCarthy 13:42
Well, it does. It concentrates your attention on the parts that must have the maximum focus, as opposed to a tweak to a CSS color, for example.
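
Justin doesn’t describe StrongDM’s actual guard; a minimal sketch of the idea, with invented path rules and a stubbed ticket integration, might look like this:

```python
# A sketch of an "automated sensitivity guard": read the change, and when
# a sensitive area is touched, trigger a process step. The path patterns
# and ticket stub are invented for illustration.
import fnmatch
import subprocess

# Hypothetical policy: glob pattern -> required process step.
# (fnmatch's "*" matches across "/" too, so "auth/*" covers subdirectories.)
SENSITIVE = {
    "auth/*":            "security-review",   # central access control
    "db/schema/*":       "schema-review",     # data schema owners
    "robotics/motion/*": "hardware-review",   # crashed arms are expensive
}

def changed_paths(base: str = "origin/main") -> list[str]:
    """Files touched by this branch relative to the base branch."""
    out = subprocess.check_output(
        ["git", "diff", "--name-only", f"{base}...HEAD"], text=True
    )
    return out.splitlines()

def open_ticket(step: str, path: str) -> None:
    """Stand-in for a real issue-tracker integration."""
    print(f"[ticket] {step}: extra review required, {path} was touched")

if __name__ == "__main__":
    for path in changed_paths():
        for pattern, step in SENSITIVE.items():
            if fnmatch.fnmatch(path, pattern):
                open_ticket(step, path)
```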

Max Saltonstall 13:57
Jeff, are there any moments that you recall where you ran into something that made you change how you do code reviews, or how you handle code and the effort and the sharing and the collaboration? Something that was maybe a wake-up moment?

Jeff Burkhart 14:10
Yeah. So I’ve certainly been in other organizations where, obviously, it’s easy for things to get personalized and for feelings to get hurt. It’s easy to focus on nits, which can be fine to point out. But one thing that we do here is we have a code review meeting where we onboard new engineers. We bring in a mix of current engineers that are familiar with our process, and an engineering manager runs the meeting.

Jeff Burkhart 14:46
With the new engineers, we have a brainstorming session: what have you seen that hasn’t worked? What has worked? What do you think are some of the pain points? And then we socialize what we’re looking for. Because the culture around code reviews is, I think, fundamental to the health of the entire engineering organization.

Jeff Burkhart 15:13
If you can’t have healthy discussions around design reviews and code reviews, things will break down as a direct consequence. So if there’s a major architectural consideration, highlight that and socialize it before it comes time for the pull request, so that people have context that’s built up.

Jeff Burkhart 15:37
If the code review is going to require some deep domain knowledge, then have those discussions, have those people in the design reviews, and have continuity, so that people aren’t flat-footed when the check-in happens. It can be important to point out when things are suggestions versus, ‘hey, I think this would be a really important thing to address.’ And again, socializing that, in our experience, makes a huge, huge difference.

Max Saltonstall 16:10
I was curious how you also make sure that code reviews are conducted with respect and civility, and that they’re sort of inclusive. Because with a new person coming into an established engineering group that has its own ways of working and talking and collaborating, it can be very easy to feel like the outsider who doesn’t know what’s going on or how to talk and how to contribute. And you can make a team much more effective by letting people contribute more quickly. So how have you enabled new members of your engineering org to get productive fast?

Jeff Burkhart 16:42
Yeah. So we have an onboarding buddy, we pair people up, and we have an onboarding checklist, a standard set of things that we do, including architectural overviews, tech talks, etc. And then we encourage them to do a check-in within their first week or two of being here on site, and again, doing that with your coding buddy.

Jeff Burkhart 17:10
So, you know, you have the orientation meeting, you do a small check-in, and you just walk them through the process, just normalizing it, because there’s nothing like going through the process. It gets normalized: yes, we do give feedback; you should expect comments; comments are great, they’re actually going to make your code stronger. It’s not that you failed in doing this. And even if you do get a large magnitude of unexpected feedback, hey, that’s a great point for follow-up, right? Go talk to the person. We’re not going to bat this back and forth with our keyboards. It’s a culture of engagement, and that engagement is appreciated.

Jeff Burkhart 17:57
And so again, socializing that, so that people know it’s expected and it’s not overly personalized. We’re really looking towards this kind of lean-in model, where we appreciate the engagement, we appreciate the early chalk talks, we appreciate the questions, we appreciate the feedback. Feedback is a gift. Not everyone knows it, but it is. Yeah.

Justin McCarthy 18:21
So Jeff, as we all know from a security point of view, the story of the 90s and the 2000s is C buffer overflows. And that was a byproduct; everybody knows that’s a byproduct of the underlying architecture, a byproduct of the language. But thankfully, tools came along, and static analysis tools made everything go away and all problems were solved. So convince me that we can’t just add a tool somewhere in the mix here and all of our problems will go away.

Jeff Burkhart 19:00
There is an interesting tool we’re currently evaluating from GitPrime that provides visibility into the code review process itself. So you can see which pull requests are waiting or in the queue, what the lag times are, what’s the distribution of people getting pull requests, what’s the change rate with check-ins; apparently it even has measurements around thrashing. And I think that’s pretty interesting. I mean, static analysis and code coverage tools have some value; they do what they do. As we all know, there’s no substitute for human intelligence, domain knowledge, and a sense of ownership of different areas of the code base. But again, support for the code review process itself: what’s our throughput?

Jeff Burkhart 20:00
Why are things getting delayed? Where are people overloaded? Where do we see the volume of changes coming through? Having that level of visibility on the code base, I think, adds a pretty great dimension to the ability to support the process.
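
GitPrime’s internals aren’t covered in the episode, but the metrics Jeff lists (lag times, waiting queues, reviewer distribution) can be sketched from plain pull-request records; the record shape and toy data below are assumptions:

```python
# A sketch of review-process metrics from plain PR records: lag to first
# review, how many PRs are still waiting, and reviewer load.
from collections import Counter
from datetime import datetime
from statistics import median

prs = [  # toy data standing in for an export from the code host's API
    {"opened": "2019-03-01T09:00", "first_review": "2019-03-01T15:00", "reviewer": "ana"},
    {"opened": "2019-03-02T10:00", "first_review": "2019-03-04T10:00", "reviewer": "ben"},
    {"opened": "2019-03-03T08:00", "first_review": None, "reviewer": None},  # still waiting
]

def hours(a: str, b: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds() / 3600

lags = [hours(p["opened"], p["first_review"]) for p in prs if p["first_review"]]
waiting = sum(1 for p in prs if not p["first_review"])
load = Counter(p["reviewer"] for p in prs if p["reviewer"])

print(f"median review lag: {median(lags):.1f}h, waiting: {waiting}, load: {dict(load)}")
```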

Justin McCarthy 20:22
If I could just put out a request to the open source community and the tool universe: I would definitely either pay for, or thank profusely, anyone who builds a tool that can accumulate the number of seconds that eyes have looked at a line of code, across the code editor and browsing in a web view. I would love to be able to see, in my editor today, as I click on a line, a nice blame that reminds me that I wrote that line, but also to know that someone else has had it open on their screen, that five people have looked at it for a total of 14 minutes in the last six months. So anyone that wants to build that, I’m your first customer.

Justin McCarthy 21:03
Yeah, I think, you know, unintentional obfuscation is definitely an evil. I’ve seen people that were absolutely brilliant who could write really terse code that was maybe even completely correct. But if it’s not maintainable, if other people can’t read it... legibility, clarity, I think that’s a major requirement and feature of the code base itself. Do you have automated tooling to do things like conforming to style or reducing noticed anti-patterns, so that you can automate making that code more readable by future engineers on your team? Because I think the thing that a human can do that a computer can’t yet is ask: is it going to be a real pain for me to modify this code in a year?

Jeff Burkhart 21:54
Right. You know, we have some code coverage tools, we have some style check tools, so we do the basics. I’d say we’re not particularly sophisticated in our depth of usage of those tools. We’re really in an environment where we’re moving pretty fast. There’s a very strong orientation towards agility: we’re working directly with the scientists, and the scientists are developing the science as we go, so it’s truly a culture of experimentation. We’re building the airplane while we’re flying it. So we’re iterating on the data model, figuring out different ways of approaching it; we have to refactor things, and we know consciously that this is an experimentation phase and it isn’t hardened yet, so we’re really leaning into the agility. Why do code reviews at all if speed is such an important factor? Shouldn’t you throw them out? Right, so then we do have a hybrid.

Jeff Burkhart 23:02
So again, part of our system needs to run like a factory. I’m responsible for our core infrastructure, so databases, data services, search, job execution, and the other five or six engineering teams are all dependent on the solidity of that code base. Access control, obviously, is a big part of that. The software from my group, we’re running on a release-train model, releasing every two weeks. And then I have other teams that are doing this leapfrog model: their software leverages the features that are in our core components. So I release on a two-week basis; they release on demand on top of that code base, and the core infrastructure gets released again in the next two weeks.

Jeff Burkhart 24:00
They build on top of that, and then they’re responsible for the quality of their own code, how much user acceptance testing they want, and the frequency of their own releases. So we’re really working at this sort of hybrid model: stability where security is rock solid, the databases, core services, where we’ve got SLAs and hard uptime requirements, 24/7 everything, and then agility for the data science and computational biology teams.

Max Saltonstall 25:00
A question about the 500-line guidance for pull requests. To me, 450 lines seems like a lot of code to try to internalize, to read, to understand, and to see the interactions of. My gut, or my past experience, would say I’d like it to be under 100, please, and that’s not always going to be possible. But if someone’s writing a 500-line pull request, I feel there’s probably an opportunity to break that up into more digestible chunks. How did you get to that number? How often is the guideline followed? Be frank about it.

Jeff Burkhart 25:12
It was here before I got here. When I got here, the company had been founded by our CTO and, like, three other core engineers, and they laid down a lot of the core infrastructure. Part of that was his sort of style guide: 500 lines. It could be smaller, but that’s the upper limit.

Jeff Burkhart 25:36
Now, that said, I’ve had engineers that really struggled with that. I mean, they’re like, I’ve got this 3,000, this 5,000-line monstrosity. And again, the 500 limit has really served us well. It’s like, yeah, you need to break it into pieces that are digestible.

Jeff Burkhart 26:00
So I agree, I think 500 lines itself can be a lot, but it at least gives us a reasonable envelope to work within.

Max Saltonstall
Do you have any stats on timing? So you could say, look, if it’s under 100 lines, it tends to get reviewed within 12 hours, and if it’s between 100 and 400 lines, it tends to get reviewed in 36 hours.

Jeff Burkhart
Yeah, I think that would be a great metric. We haven’t gotten to it ourselves. Again, I think tools like GitPrime give you that visibility, and there are some really interesting things that come out of that. Yeah.

Max Saltonstall 26:40
Are there things that you wish you did slightly differently around code review, and especially securing your code and making sure it’s correct, that you haven’t done yet or that you’re trying to do now?

Jeff Burkhart 26:53
Yeah, you know, there are a couple, but one thing that can happen is code review fatigue, or design review fatigue. People go around the block on the design review and they kind of get worn out, and they’re like, okay, whatever. And then it gets down to the end, and the code review... everybody’s worn out at exactly the wrong time, right when you need to give the code review the attention it deserves. They just go: oh yeah, I saw that design review, it’s fine, rubber stamp.

Jeff Burkhart 27:31
So again, I think it’s about keeping enough momentum. Because obviously, when it’s down to the wire with a code review, that’s when it’s actually making it into the code base, and if you’ve worn everybody down, worn yourself down, when you get to that last mile, that’s exactly when you need enough momentum to finish the race. It’s like, okay, we really need to be looking at it now. So that’s one, I think.

Jeff Burkhart 28:10
The other is, again, it just gets down to a cultural issue: as humans, we’re pretty conflict avoidant, and if you think someone else is wrong...

Justin McCarthy 28:24
I disagree, Max.

Max Saltonstall 28:27
Thanks, Justin.

Jeff Burkhart 28:29
So, you know, there can be hurt feelings, there can be people that are trying to avoid each other. And I think it’s really, really important to work through those things proactively and ongoing, and to never assume you’re done. Just always keep that edge of, we can get better about code reviews. It’s really important to keep the culture healthy.

Jeff Burkhart 29:00
Don’t take it for granted.

Jeff Burkhart 29:02
You know, one problem with sprint planning, or agile in general, is it can get tedious, right? If you’re doing one-week sprints or two-week sprints, and you’re doing that 52 weeks a year without well-defined goals or something to celebrate,

Jeff Burkhart 29:23
it can just get to be a death march if it’s not managed effectively. And code reviews can be the same thing, right? It’s like, hey, we’re doing this day in, day out. How do you maintain some sense of momentum, some sense of accomplishment, some sense of where we’re at in the cycle, so we actually have something to celebrate, an actual accomplishment? Because I think that’s a real issue.

Max Saltonstall
Totally.

Max Saltonstall 29:58
How do you deal with conflicting check-ins, where someone over here is writing some other code that actually tries to change the same thing? No one realized that ahead of time, and people then start gnashing their teeth.

Jeff Burkhart 30:09
Yeah. So again, it’s really, really about supporting people in getting up out of their seats, talking to people, socializing things early on. When you see the conflict, again, lean into it. Try to catch these things as early as possible. That, I think, is a huge role that managers and tech leads can play: having enough visibility so that we’re not developing things in silos, duplicating efforts, stepping on each other’s toes.

Max Saltonstall
Cool. Thank you very much, it was a pleasure talking to you.

Jeff Burkhart
Good. Thank you.

 

 


About the Author

Schuyler, Chairman of the Board, began working with startups as one of the first employees at Cross Commerce Media. Since then, he has worked at the venture capital firms DFJ Gotham and High Peaks Venture Partners. He is also the host of Founders@Fail and author of Inc.com's "Failing Forward" column, where he interviews veteran entrepreneurs about the bumps, bruises, and reality of life in the startup trenches. His leadership philosophy: be humble enough to realize you don’t know everything and curious enough to want to learn more. He holds a B.A. and M.B.A. from Columbia University. To contact Schuyler, visit him on LinkedIn.

