Dalton Hahn Dissertation Talk


Dalton's Story

I'm from Meriden, Kansas. It's about half an hour, 40 minutes north of here in Lawrence. I grew up going to school in Meriden and lived in the same bedroom for the first 18 years of my life. And then I made my way to Manhattan, Kansas, where I did my undergrad at Kansas State in Computer Science. Immediately following that, I moved here to Lawrence and started my doctoral program.

What brought you to study this particular subject?

I guess what brought me to technology and computer science in general was that I was always a video game kid growing up. My earliest memories are playing Pokemon on a Game Boy in black and white, and just being involved in a lot of that. What specifically got me into computer science was that I was one of those kids who didn't really want to ask my parents for the credit card to pay for games. Growing up, I found ways to acquire things online, which would inevitably break the family laptop. So at 11:00 p.m., after my parents had gone to bed, I'd break the laptop, and I'd have to have it fixed by the morning so that they didn't know. That got me into cybersecurity a bit as well: why is this breaking? How can I go about doing things better? What are the inner workings behind why this piece of technology is suddenly breaking and why things are going bad? Between all of that, it was computers, networking, trying to figure out how to play games with my friends online, and trying to figure out how things worked.

Dissertation Discussion

Let's break it down a little bit. The mishaps in microservices portion is that we're seeing modern software systems really trying to embrace what goes on in cloud computing and the capabilities and functionality presented there. Cloud computing has really shifted things from the stereotypical warehouse in the back of your software engineering firm that hosts all of your software to pushing it globally. It's not just your warehouse anymore; it's, say, Amazon's or Google's warehouses that are globally distributed. To really make use of those environments, software engineering itself has changed and shifted from building big blocks of systems that run in one location to fragmenting them and breaking them down into really small pieces that communicate together. That's what we call microservices. Essentially, you can think of microservices as really individual little snippets of code that, when you put them all together, form a team that collaborates and achieves your high-level business goals. In that way, it's really dependent on networking, which is where the global distribution aspect comes into play, but also on scalability. You can say, we're getting a lot of pressure on this one particular microservice; what if we just put 10 copies of that service out there instead? In that way, you can hyperfocus where you're turning the knobs, so to speak, on your system to really get as much performance out as possible. That's the microservices part.

Service meshes are essentially how to manage all of these tiny little ants that are building up your colony, and how to make them work together in a coordinated fashion. Service meshes put a blanket, or a mesh, so to speak, over all of these individual components and get them to communicate properly. But they also add security, because developers don't want to have to worry about all of the security features, network features, anything like that. They just want to write code that does what the business wants to do. Service meshes alleviate that a little bit. They say: you can write code exactly how you want to write it, code that just serves your purposes, and we're going to add in all of the other stuff for you.

My research specifically focuses on this: okay, we have these capabilities, we have these systems, and we're noticing this shift, but where does security fall into that? Is it getting left behind? Are we missing out on things? Is it done properly? What we're really finding is that service meshes, while they give lots of capabilities to the microservices, leave security in a very old-fashioned state. They're still using lots of techniques that have been around since the early 2000s or the late '90s, so they're really not keeping up with the pace of how fast things move in modern-day cloud computing. We're missing out on a lot of the features of keeping things fresh and managing them as they move in and out of the deployment. You can scale things very quickly, you can add in new copies of your services, and you can take them away as well to save money. But the service mesh treats everything as being alive forever. If things are leaving your cluster but the security credentials or certificates for those elements still exist, you're leaving gaps open to potentially leaking them or having them compromised. An adversary can take advantage of that and do nefarious activities in your environment.

To, I guess, wrap it all together: we're looking at a modern shift in software engineering and how security is getting left behind by the rapid adoption of these new technologies, where the design of the security systems isn't matching the needs of the domain.
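
To make that stale-credential gap concrete, here is a minimal, hypothetical Python sketch (my own toy model for illustration, not code from any real service mesh; the class and method names are invented). It models a mesh that issues a workload certificate when a service is deployed but never expires or revokes it when the instance scales away, so a leaked credential from a departed instance is still accepted:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Certificate:
        service: str
        issued_at: datetime
        # No expiry and no revocation marker: the credential is valid "forever".

    @dataclass
    class NaiveMesh:
        """Toy mesh that forgets to tie credentials to instance lifetime."""
        issued: dict = field(default_factory=dict)   # service name -> Certificate
        running: set = field(default_factory=set)    # currently deployed instances

        def deploy(self, service: str) -> Certificate:
            cert = Certificate(service, datetime.now())
            self.issued[service] = cert
            self.running.add(service)
            return cert

        def scale_down(self, service: str) -> None:
            # The instance goes away, but its certificate is never revoked.
            self.running.discard(service)

        def accepts(self, cert: Certificate) -> bool:
            # Trust is based only on "was this ever issued?", not "is it still live?".
            return self.issued.get(cert.service) is cert

    mesh = NaiveMesh()
    leaked = mesh.deploy("recommendations-v2")
    mesh.scale_down("recommendations-v2")
    print(mesh.accepts(leaked))  # True: the departed instance's credential still works

The sketch only shows the failure mode described above: nothing in this model ever ties a credential's validity to how long the workload actually lives.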

How would a layperson encounter the contributions of your research?

I think the best way to explain this is to put into perspective some of the companies or software that you use day to day and how they're using it. Imagine Netflix, for example. Netflix has all of these features, things like playing videos and playing new movies, but also recommendations of different things you might be interested in based on your viewing habits. Let's say you just signed up for Netflix and you're ready to watch a movie. The first thing you do is log into their system. You can imagine that that login page is a microservice: the fact of it just rendering a website to you with a login and a password field can be imagined as one service. Once you're logged in, another service would be something like your My List, or whatever the tray of movies available to you is. And then another service could be the actual video playing engine. All of these are ways of breaking down the larger Netflix bubble, so to speak. But as you, the user, move through these microservices and interact with them in your own unique patterns, the microservices themselves, if they're compromised or if there's a vulnerability or something, may leave you open to vulnerability as well. Your recommendations may be poisoned. You may not be recommended the latest Marvel movie; you may instead be recommended the latest DC movie, if somebody were for some reason nefarious in the sense of wanting to improve DC's viewer numbers, for example.

But essentially, a viewer, or you as a user, isn't necessarily the core target most of the time. What we're seeing is that adversaries are looking to gain resources, or to gain particular knowledge or particular data. They're trying to collect large amounts of computers or large amounts of processing power for things like crypto mining or denial-of-service attacks. They view these large-scale microservice deployments as good targets for getting lots of bots together, especially because there's this central management scheme where the service mesh is laying over top. If you can compromise the service mesh, you now own a thousand or ten thousand unique microservices that you can leverage for whatever you want. You can also just stay hidden and let the microservices continue to operate as they would, and then when you're ready to do your attack, or ready to slowly begin crypto mining, you can do it under the radar, because the monitoring in the management scheme is not quite as transparent as it would need to be to see that.
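
As a rough illustration of that decomposition (purely hypothetical service names, not Netflix's actual architecture), each feature can be thought of as its own small, separately deployed program that answers one kind of request:

    # Hypothetical decomposition of a streaming app into microservices.
    # Each function stands in for a separately deployed, separately scaled
    # service that the others would reach over the network.

    def login_service(username: str, password: str) -> dict:
        """Renders the login page and checks credentials (details omitted)."""
        return {"user": username, "token": "session-token"}

    def my_list_service(token: str) -> list:
        """Returns the user's saved titles, keyed by their session token."""
        return ["Some Movie", "Some Series"]

    def playback_service(token: str, title: str) -> str:
        """Streams the requested title back to the client."""
        return "streaming bytes for " + title

    # A single user action fans out across several independent services.
    session = login_service("alice", "example-password")
    saved = my_list_service(session["token"])
    print(playback_service(session["token"], saved[0]))

In a real deployment each of these would be its own process sitting behind the mesh, which is exactly why compromising the layer that coordinates them all is so attractive to an adversary.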

What are some of the questions or issues you sought to address in your research?

We really focus on what the current state of the art in service meshes is and what they're offering in terms of security as it sits right now. If you were to pick one of these tools up off the shelf and put it in your environment, what are you getting out of it, and what are its capabilities to actually keep you secure? But really, my PhD focus has been on understanding that things aren't the way they ought to be, or that we're missing the mark when it comes to the current offerings, and asking how we can make those improvements. As part of my PhD research, I've made a couple of different prototypes that really try to focus in on particular issues in service meshes and how we can take steps toward making them more aligned with what we need in the domain.

For example, one of the prototypes focuses on how we can automatically and cohesively keep certificates, keys, and tokens fresh. You don't want an encryption key to last forever, right? You want a constrained time window in which it can be used, so that if something happens or a leak happens, you're not exposing yourself forever. In that way, good design is to limit the time window in which they're actually used. But to do that, you have to account for the fact that your mesh or your deployment may last longer than a month or longer than a year. So we're now in the position of having to re-inject new keys or new tokens periodically to keep the cluster running. The first prototype focuses on how we can integrate that automatic freshness mechanism into service meshes while upholding good security design.

A second prototype focuses on this: let's assume that something gets compromised in your cluster, whether it's one microservice, or you've accidentally put a backdoor in one, or you've deployed something that has a backdoor in it. Is it possible for us to constrain that damage? If an adversary were to gain that foothold, can we constrain the damage and isolate it so that it doesn't expand, so that they can't get those massive resources they're looking for or exfiltrate the data they're really wanting to siphon off? What that one does is essentially say: if a compromised portion, or really any portion, of the system tries to make a request to something that is against the rules, so to speak, or against the network policies, we immediately revoke all of its keys and certificates and essentially put it in a box where it can't talk to anybody. We can then go in and look at it, diagnose it, and figure out whether it was just a configuration mistake, like somebody accidentally putting the wrong key there, or whether it's something worse and it's in fact an adversary or a malicious entity that was controlling it.

Then the final prototype that I've built as part of my PhD focuses on really complex behavior in microservices and how we can secure it. Going back to the Netflix idea of these microservices: say you have the login service, which shows you the login page, and then you have a database, which stores user credentials so it can check whether your login info matches what it has stored. Let's assume there's an intermediary service in between that does the logic of taking your credentials, taking the stored credentials, and comparing them. Well, that's a fairly complex procedure in terms of microservice interaction that isn't actually captured by modern service mesh policies, by the modern rule set that they use.
What you actually do is expose yourself to adversaries manipulating these really complex relationships to bypass the current rule set. So we've worked on expanding the rule set and the policies that govern these microservice interactions, and we've tried to integrate that as a next step in access control, or policy management, within service meshes, to where we're now defending against these really complex adversarial attacks.
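
As a very rough sketch of the revoke-and-isolate behavior described for the second prototype (this is my own toy Python illustration, not the prototype's actual code, and all service names are made up), each service is only allowed to call the peers listed in its policy, and the first out-of-policy request pulls its credentials and boxes it in:

    # Toy policy enforcer: allowed call edges between services, revoke on violation.

    ALLOWED = {
        "login": {"auth-logic"},          # login may only call the auth-logic service
        "auth-logic": {"credential-db"},  # which may only call the credential database
    }

    revoked = set()  # quarantined services whose keys and certificates were pulled

    def authorize(src, dst):
        """Allow the call only if it matches policy; otherwise quarantine the caller."""
        if src in revoked:
            return False  # already isolated: it can't talk to anybody
        if dst in ALLOWED.get(src, set()):
            return True
        # Out-of-policy request: revoke credentials and box the service in so an
        # operator can inspect it (configuration mistake vs. actual compromise).
        revoked.add(src)
        return False

    print(authorize("login", "auth-logic"))     # True: matches policy
    print(authorize("login", "credential-db"))  # False: violation, login is quarantined
    print(authorize("login", "auth-logic"))     # False: it stays isolated afterwards

The third prototype's expanded rule set can be thought of as making entries like these aware of whole call chains (login through the intermediary logic to the credential database) rather than single edges, which is the kind of multi-service relationship the current policies miss.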

Explain this key concept: Service Meshes

Broadly speaking, service meshes are on the rise along with microservices as a software engineering paradigm. It's still quite early, but what we're hoping to do is really to say: we're catching this before it gets too bad. A good analogy for this is that nobody knew the Internet was going to be as big as it is. When the Internet was first built, it was built just for functionality and connectivity. But now all of a sudden we're doing banking, we're doing insurance, we're doing taxes online. All of a sudden we have this really sensitive data going across shared network lines, and we have to patch security onto, and duct tape together, this system that we're all now dependent upon. With service meshes, and with microservices in general, we're really hoping that the design elements we're trying to put out there can get adopted before things become so widespread or so highly adopted that there's no going back, and we're just putting in new fences, new rails, to try to constrain these systems to work how we're hoping, while losing out on performance, maintainability, or usability.

But broadly speaking, in my research in general, I'm always really fascinated with automation and how systems work together. I'd really like to see service meshes embrace new domains. The field has had a lot of issues in the past where not-great security design gets thrown into the tools and devices that people are adopting, and security, again, gets left in the dust. Especially as we're moving into the space domain of putting up constellations of satellites, what security are we using to actually coordinate those together? Some companies are actually looking into service meshes as a way of saying: we're putting out these constellation satellite boxes that are, for the most part, pretty low resource usage and pretty low power, but we still need security because we are now an Internet service provider in space. So they're looking into service meshes for that. I'm really interested in seeing where service meshes go next, because they are a really good tool for distributed networks and distributed computing systems that provides really nice features and really nice capabilities. Honestly, I'm just interested in where service meshes go and how far they go, because cloud computing always rolls through new phases periodically. But at least in my opinion, I think service meshes are here to stay.

How would you describe your time researching at KU and with I2S?

I joined KU in the summer of 2018, and I was here in Lawrence, on campus, at least up until the pandemic. I always found I2S, especially the networking and the computer guys here, super receptive to anything that I could have imagined I wanted. I think at this point the tally of me taking down the network is up to three, and I think I've also caused some headaches. But especially being a security researcher, I have been given free rein to explore, do, create, and really push the boundaries on what my research needs, and I think I2S has been really receptive and really good about fostering that activity. I also really like the research center being separate from the classroom spaces. There is something to be said about being in Eaton Hall, where classes are held, and getting interrupted by undergrads, or sharing the lab space with people who aren't doing research and are only focused on classes. Being here in Nichols, being here in I2S, has been a great experience of having that work-class separation for my research and really being able to just have all the resources and all of the capabilities that I need for my work. But even aside from that, after the pandemic hit, I actually moved up to Michigan to be with my wife. She goes to school at the University of Michigan and has to be in person, but most of my research is on cloud container technologies and most of my work can be done remotely. I2S has been a huge support in that as well. Being able to be in Michigan and not physically here on campus, but still having the capability to access any of the materials or resources that I need, and even conduct experiments on the live network from my house in Michigan, has been a huge help in what I've been doing and where I am.

What led you to ask Dr. Alex Bardas to be your advisor?

When I was at Kansas State University, I was doing undergraduate research starting in my junior year. At the time, Alex was actually a visiting professor at Kansas State, having just finished his PhD there, so our origin story starts even before my PhD did. Alex and I have known each other for a long time at this point. But really, I think the thing that got me interested in working with Alex for my PhD was the relationship and the rapport that we had built during my undergrad experience, and also his willingness to let me explore, to facilitate that, and to be there to support my interests. The work that Alex did as a PhD student and some of the work that he's been doing during his assistant professorship has been very different from my interests and my work in the cloud computing and service mesh space. Having him there to bounce ideas off of, while also being able to explore on my own, has been really helpful. I think some of the biggest advice I can give on choosing an advisor and aligning your research interests with someone is finding someone who is looking for you to succeed and has the resources to help you foster that success. You don't have to perfectly align your research interests with an advisor, but they should be at least knowledgeable about the domain that you're working in. Some of Alex's PhD research very closely aligns with the cloud computing and modern tooling that I've been looking at. But more than anything, finding someone that you can disagree with, I think, is a huge thing. Alex and I, having known each other for seven years at this point, have differences in our research interests and in where we think directions should head on projects and things like that. But knowing how to, not necessarily argue, but disagree together, I think, is a huge part of doing research. Research is hard. You're constantly looking in the dark, and you're constantly running into walls that you have to figure out how to find a way through, or a way around. It can be extremely frustrating and really demoralizing at times to be stuck on a problem for so long. Finding an advisor who can help you through that, or can at least provide new ideas or new strategies to overcome it, has been a huge help for me. Across seven years, Alex and I have certainly disagreed plenty of times. But at the end of the day, we want the same thing: we want to see the research project progress, we want to see the paper get published, we want to see the graduation at the end of it. I think holding onto that has always been something that's helped me through my PhD.

What advice would you give to fellow PhD students or future students?

I'll be the first to say I'm not the smartest person here at I2S or in the Computer Science department or anything like that, and I don't feel like you have to be to do a doctorate. You have to be really passionate about what you're interested in in terms of research, and you have to be willing to fail a lot. The number of paper rejections I've gotten throughout my PhD, and the number of times a project hasn't panned out, or the number of times I've failed to get something running or working, has been tremendous. I think the biggest traits I would look for in a PhD student are perseverance and drive in the face of challenge, but also just a willingness and an engagement in what you're working on. The easiest way to hate your PhD or hate the research you're doing is to not be interested in it. I think aligning your interests with your advisor is a huge part of that, because if your advisor wants you to work on something that you're not interested in, or if you want to work on something that your advisor isn't interested in, the misalignment there is really going to detract from your enjoyment of being a student and a researcher, and also from your willingness and desire to do the work. In terms of advice for writing, and I even have to tell myself this: a lot of the time the reviewers are trying to make the paper better. The rejection, while it does hurt and while it does sting, ultimately is in the hope that the paper gets better and that it comes out as meaningful research. While it's always demoralizing to have your work criticized and rejected, hopefully the iterations and the revisions make it a better paper and more likely to be well respected or meaningful to the community.