RIPE 84

Plenary session
.
Tuesday, 17 May 2022
.
11 a.m.
.
Green Web
.

PETER HESSLER: Hello, everybody, welcome to the eleven o'clock session. My name is Peter Hessler and I'll be chairing this morning's sessions with Jan Zorz. It's nice to see so many faces in the room, but I would like to see a little bit less of your faces, if you know what I mean: there are masks available out in the waiting room, and there are self‑tests if you'd like to make sure you don't bring home some little friends with you.

First off, we have Chris Adams from the Green Web Foundation talking about a fossil free Internet.

CHRIS ADAMS: Hi. I assume you can hear me okay? Right?
.
All right. Hi folks, my name is Chris Adams, I am the executive director of the Green Web Foundation, an NGO set up to work towards a fossil free Internet by 2030. I am also an organiser of an online community called climateaction.tech, and editor of a magazine called Branch, which is all about the intersection between climate and technology, and I'm the Chair of the Green Software Foundation Policy Working Group.

I am going to barrel through this talk relatively quickly, so if you miss anything I'm saying there is actually a link that you can follow here through to our website with a link to the entire deck with all the kind of words that I'll be reading off as I run through this.


.
In the limited time I have with you, I am going to cover three things. First I'll give you some background on climate to help set the scene and our place in this.

Then I'm going to introduce you to something which I'm referring to as a fossil free Internet and then point to some recognisable emerging qualities that I think may be of interest to you. And then finally, I'm going to make a case ‑‑ I'll cover how to make a case for this when you are actually at work.
.
So it looks like most people are sitting comfortably, so I guess I'll begin.
.
What's a tech conference without a cartoon these days? There is one for everything. This was released earlier this year, and I find it really interesting and useful to help me think about the climate crisis, basically, or what's happening here. I turned 40 this year, and if you look at this chart, you'll see how, over my entire lifetime, we have basically seen this rise in emissions. We have seen there are certain people who do well out of it and certain people who don't. And there may be reasons why we haven't been moving as fast as we could be moving.

If we were to relate this to something a bit closer to home, looking at what's happened in the last twelve months, you can see these same kinds of charts here.
.
There are certain groups and there are certain people who do really, really well out of us not doing much or thinking too hard about this, and it works very well for some people, but for some of us if you pay energy bills at all, you might kind of feel that maybe I'm not doing so well out of this and maybe this is something we should be doing about now, which is really affecting us now.
.
And I think I'm going to show this in massive letters, because it might seem obvious, but while we all talk about things like efficiency, one of the reasons we have this background level of angst around climate is basically not that we don't have alternatives to get off fossil fuels; it's that most of the time we haven't been able to find a clear path off them. But there are paths off them, for example.

Around May last year, a group called the International Energy Agency published a ground‑breaking report detailing, for the first time ever, a roadmap to get the entire energy sector ‑‑ which is what we rely on a lot of the time as engineers and Internet technologists for powering everything we do ‑‑ onto the same 1.5 degree pathway that we see scientists talk about in the news again and again, and that we see children talking about in the news.

Now, this is interesting because the IEA is not a radical campaigning organisation. It was initially set up by Henry Kissinger in the 1970s to ensure a steady supply of cheap crude oil, as a response to oil‑producing countries in the Middle East restricting how much oil they produced during the oil crisis. It's become influential over the last 50 years, and countries base their industrial strategy on what these projections say will happen.
.
And in this report, it's interesting because they share a roadmap that basically says: yeah, you know the science guys, they are right, we really do need to get off fossil fuels. Remember, this was an organisation that was set up to ensure a steady supply of oil for rich countries, and they said no more exploration from now on. That's literally spelt out in their report on the bottom left‑hand side.


.
In this report they lay out a clear pathway to get to net zero and 1.5 degrees of warming, basically avoiding the worst of climate change, and along the way we end up with an economy that's larger, more efficient, healthier and richer, and while fossil fuels aren't phased out entirely, they play a much, much smaller role.

It's a positive‑sounding future. And as you can see, you might ask yourself: what part does the Internet play in this? Where are we in this?
.

.
Now, generally speaking, the Internet or the tech industry, you can think of being roughly the same size as, say, the shipping industry or the aviation industry in terms of, like, carbon emissions, basically.

And I think we need a fossil free Internet. And if you wanted to tell your boss why you think we should have a fossil free Internet, these are the main reasons I think we would want to have this.

We're in a climate emergency. So, a fossil free Internet is obviously going to save carbon. It saves lives, because there is an undeniable toll from the bad air quality that comes from burning fossil fuels. It saves money, because if you pay energy bills you see how the cost has gone up. And then if you have people who actually care about this stuff, generally greener firms tend to keep people longer. And finally, well, you can see this whole war going on in Ukraine, which is related to this, and how we end up funding essentially a war machine by default because we are relying on fossil fuels so much.


.
So I have explained why you might want to have a fossil free Internet. Let's talk about what a fossil free Internet looks like, or what qualities you might look for in it.


.
So, I have tried to come up with a memorable way to think about this, or a way that you might look for things to help you identify what a fossil free Internet might actually be. It's partly based on ideas that you see in the web accessibility movement, where they talk about POUR: being perceivable, operable, understandable and robust. I have come up with GOLD, which stands for green, open, lean and distributed. I am going to run through these in the time that I have here.


.
First of all, green, kind of obviously, means green energy and green inputs, as we see we need energy to run computers but also to make computers, and networks.

And every time we use the Internet, for example, even if we don't mean to, we're basically burning fossil fuels, because the Internet is currently the biggest machine in the world and it runs on fossil fuels, predominantly coal right now. We really do need to stop this, and this is something that we can do. I kind of feel that, if we're developers or engineers, we don't really need to build the web with fossil fuels any more than we need to build houses with toxic materials. But generally speaking, when you are using power, a chunk of it will be coming from things like gas and coal right now, even if you are buying a green energy tariff.

This actually will change depending on where you are in the world and where you are on the Internet. Has anyone heard of this website here at all? Electricity Map? Okay, one or two.


.
So I find this map really, really useful. It basically shows you how green the grid is in different parts of the world. It's an open source project, and they basically pull this data from various open data sources, and this can be used for you to figure out where you might choose to run infrastructure, or run particular workloads, for a lighter carbon footprint. Google does some of this.

You can also see how different decisions people have made can change how the grid works. For example, you can see France, full of nukes, really, really low carbon most of the time when they work. You can see Poland, really, really big in coal, with much, much higher carbon intensity, and vice versa. You can see a bunch of this stuff. Germany, we are the land of coal and solar; we've got this weird mix. Generally speaking, I find this fascinating myself.
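To make the idea concrete, here is a minimal sketch of using per-country carbon intensity to pick where to run a workload. The intensity figures are illustrative placeholders, not live data; a real tool would pull them from an open data source such as Electricity Map.

```python
# Sketch: picking the greenest region to run a workload, using per-country
# grid carbon intensity. The numbers below are invented for illustration.

ILLUSTRATIVE_INTENSITY_G_PER_KWH = {
    "FR": 60,    # mostly nuclear, usually low carbon
    "PL": 650,   # heavily coal based
    "DE": 350,   # mixed coal and renewables
}

def greenest_region(candidates):
    """Return the candidate region with the lowest carbon intensity."""
    return min(candidates, key=ILLUSTRATIVE_INTENSITY_G_PER_KWH.get)

print(greenest_region(["FR", "PL", "DE"]))  # FR
```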
.
That's one thing.

Now, if you did want to run on green energy, the way we tend to use it is: if you are choosing to buy, say, infrastructure that's running on green energy, what you are normally doing is basically investing some money into the grid to make sure that you are essentially shifting us away from fossil fuels to something greener. So, in the long term, this ends up making the grid greener for everyone, and you typically look at this on a yearly basis. So even if you are using, say, a green tariff, there are certain times when you are not entirely running on green energy, and this kind of makes sense: solar panels obviously aren't going to be effective at night, by comparison.


.
And this is basically one of the things that we do. My organisation, we track the transition of the web away from being entirely run on fossil fuels to something which is not running on fossil fuels. We have been doing this for the last 15 years, and we track how people account for this. Generally speaking, what we count as green is if they have accounted for all the emissions from running the infrastructure that they are delivering web services with.


.
Now, it's worth repeating that you are not always running something on green power. So, if you say, well, half my energy this year is coming from green power, it's not necessarily 50% continuously; it's usually going to be like you have these fluctuations like you see here. I am sharing this because this is something you might come back to later on, and there are increasingly clever, time‑stamped ways to account for this stuff. Especially as we have seen the cost of things like batteries and solar come down, it's increasingly likely to be possible to run your infrastructure without needing to be reliant on the grid, so that you can basically time‑shift green energy to run it locally.
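The annual-versus-hourly distinction above can be shown in a couple of lines. The hourly figures are made up to illustrate the fluctuation: the year averages out to 50% green even though very few individual hours are actually near 100% green.

```python
# Sketch: "50% green over the year" is not "50% green every hour".
# The hourly green fractions below are invented sample data.

hourly_green_fraction = [0.9, 0.8, 0.2, 0.1, 0.05, 0.3, 0.7, 0.95]

annual_average = sum(hourly_green_fraction) / len(hourly_green_fraction)
mostly_green_hours = sum(1 for f in hourly_green_fraction if f >= 0.9)

print(f"annual average: {annual_average:.0%}")                # 50%
print(f"hours at >=90% green: {mostly_green_hours} of {len(hourly_green_fraction)}")  # 2 of 8
```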


.
Now, I have spoken about like running something on green energy. But that's not the only source of emissions when we think about the Internet, basically.


.
There is obviously a carbon footprint from running things, but also a carbon footprint from making things, and depending on how much you use a device, that will actually be somewhat different. What you are looking at here is a chart showing how you have, say, embodied emissions, which come from turning sand into silicon chips; that needs a lot of energy, and the energy usually comes from burning fossil fuels again. You might see things like networks and data centres where, because they are on 24/7, what you choose to run them on has a much greater impact.

So this is one thing we need to be aware of. You need to be thinking about the entire lifecycle, and this also gives you an argument for saying, well, if, for example, you have got some user devices, one thing you might want to do is extend the life of those devices by using them for longer, because proportionately a significant chunk of their footprint is actually going to come from the making process rather than the use phase.
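The lifetime-extension argument is simple arithmetic: amortise the one-off embodied emissions over the device's lifetime and add the yearly operational emissions. The figures below are illustrative assumptions, not measured values for any real device.

```python
# Sketch: amortising embodied emissions over device lifetime.
# Both figures are invented for illustration.

EMBODIED_KG_CO2 = 200.0   # one-off emissions from manufacturing the device
ANNUAL_USE_KG_CO2 = 25.0  # yearly emissions from electricity while in use

def footprint_per_year(lifetime_years):
    """Total yearly footprint: amortised embodied plus operational emissions."""
    return EMBODIED_KG_CO2 / lifetime_years + ANNUAL_USE_KG_CO2

for years in (2, 4, 8):
    print(years, "years:", footprint_per_year(years), "kg CO2/year")
# Doubling the lifetime from 2 to 4 years cuts the yearly footprint
# from 125 to 75 kg in this example.
```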


.
So that's kind of green, and green energy, which is obvious. The other thing which I think is interesting is the idea of open. So, open in its approach, not just necessarily open source, because if we are going to make data‑informed decisions about how we might end up with a greener, more advanced Internet, then you need to have data for a data‑informed decision. Following on from the example about hardware, there are some interesting projects. This is a company in France; they are collecting all the numbers for products they can find, and they will let you figure out the emissions associated with maybe your device, or your servers or something, and then, based on the lifetime, get an idea of what the impact of your choice of energy might be compared to the emissions coming from making it, for example. This is really, really useful when you are trying to figure out: how am I going to decarbonise a set of tools? Do I choose greener energy, or do I choose to not buy so many servers in the first place, for example?

The other thing that's interesting about open is, if you have open data, then you can put it in lots of interesting places. This is an organisation called Ember. They collect emissions data from pretty much every country in the world and they curate it and make it available in useful datasets. So you can find out the carbon intensity of electricity anywhere in the world, how much of it is coming from, say, coal or gas or something like that, and they use this to inform policy. But because it's open, you can do some other things.

So one thing that we have been doing with RIPE is building some carbon intensity APIs, so we can annotate any public IP address with carbon intensity data. Rather than just using a map, we get an idea of what the emissions are from running stuff in one place versus another place.
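The shape of such an annotation is roughly: geolocate the address, then look up the grid intensity for that location. Both lookup tables below are stubs I've invented for illustration; a real service would use a geolocation database such as MaxMind and open grid data.

```python
# Sketch of the idea behind a carbon intensity API for IP addresses:
# geolocate the address, then look up the grid intensity for that country.
# Both tables are stubs; the IPs come from the documentation range 192.0.2.0/24.

GEO_STUB = {"192.0.2.1": "FR", "192.0.2.2": "PL"}  # illustrative geolocation
INTENSITY_STUB = {"FR": 60, "PL": 650}             # gCO2/kWh, illustrative

def annotate(ip):
    """Return the IP annotated with the carbon intensity of its grid."""
    country = GEO_STUB.get(ip)
    if country is None:
        return {"ip": ip, "error": "unknown location"}
    return {"ip": ip, "country": country,
            "carbon_intensity_g_per_kwh": INTENSITY_STUB[country]}

print(annotate("192.0.2.1"))
```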
.
There are obviously caveats related to this. This is only going to be as good as the geolocation data that you rely on, and we are currently using MaxMind for this, but I think this is the kind of direction we'll be starting to move towards. And I'll point to some examples of this, because this is something that we're doing as an experiment for anyone to use, but some other organisations are already further along with this.


.
So, Google is an example of this. Google have basically been ahead of the game on this in quite a few ways, they have invested in having ‑‑ running green energy for a lot of the time and they have been building tools that make it possible for you to look at your infrastructure and work out where you might run a workload based on things like the carbon intensity, the latency requirements you might have or the cost requirements you might have and I am afraid you can't see it quite so well in here, but these kind of criteria are new criteria you might take into account when you are looking to basically decide how and when you might run any kind of service or any kind of application.


.
Open is also interesting because I think it allows new ways to think about how we might run digital infrastructure. What you are looking at here is the heat exchanging part of a new data centre in the Netherlands using a bunch of Open Compute servers. Open Compute is a project by a number of large organisations like Facebook and Microsoft to build open source server components, partly to help them run their own hyperscale data centres. But it turns out these are useful in lots of other places, and this is a really good example. This is part of a data centre from a company called Blockheating. Rather than having a massive out‑of‑town data centre the size of a football pitch somewhere, where you spend huge amounts of time and money keeping things cool and all the excess heat gets vented into the sky, what they do instead is have lots and lots of smaller data centres. They take lots and lots of end‑of‑life servers from companies like Facebook and Microsoft, put them into shipping containers, and connect these to greenhouses that grow crops like tomatoes and cucumbers. Now, greenhouses usually need to get the temperature hot enough to grow tomatoes, and they will be burning gas to heat these things up. What Blockheating do is take the waste heat from these servers and, rather than vent it into the sky, they put it to use.

This is kind of cool. This is using waste heat productively, turning what was a problem into something that is valuable for other people, and I think that having an open tool chain makes it possible to come up with ideas you otherwise wouldn't have seen. I think this is also useful because it extends the useful life of hardware, for example.

So that's open.

Now let's talk about lean. So lean is about making resources you do use count and not wasting things needlessly.


.
So, in Germany, has anyone heard of the Blue Angel certification before? Earlier this year, there was the first ever Blue Angel certification for an eco‑certified software product; it was an open source PDF reader called Okular. This is interesting because, in order to actually get some software certified, it needs to be shown running on five‑year‑old hardware, and increasingly these kinds of certifications and ideas are being worked into how public sector organisations actually procure hardware and services these days. So while this is an interesting idea now, it's increasingly seen as one of the tools that groups will be looking to in order to reduce their own operational emissions, basically in line with the science.

It's interesting because there are certain kinds of, say, closed source tools that have weird spikes of inefficiency. So if you were to compare LibreOffice with another well known commercial office suite, one thing that's come out from projects like this to analyse this stuff ‑‑ and this is some of the work of SoftAWERE, a German project to measure these things ‑‑ is that one of the big inefficiencies was the code that blinks the cursor, which ended up causing massive spikes in energy use inside the computer. This is the kind of thing you discover when you start applying this kind of analysis to tools like this.


.
If you don't work with desktop machines, you might work with the web. There are tools that allow you to apply this lean idea to what you are building now. So GreenFrame is an example: they'll basically take a series of Docker containers to represent parts of the infrastructure that you might be running, they will measure each of these, and they'll put this into tools like continuous integration. So when you are working you can see roughly the energy associated with a particular usage journey, going from page to page or making a form submission, for example, and then, every time there is a new pull request or a new commit, they'll show the difference. So what you see here is an example of a pull request that's basically added a bunch of new energy use on the screen and the network, for example, and, with each of these, you can dive into, say, the database level or the server level and so on. So there are tools that allow you to embed these ideas into how you build digital services now.


.
Can you apply this to networks? I think you can apply the ideas of lean to networks, even if there isn't a one‑to‑one ratio between, say, downloading something and the emissions being caused by you doing that.

This is a project called Seismic that Facebook and a number of groups have been funding in off‑grid locations, particularly in Peru, where you have off‑grid connectivity that is running on diesel a lot of the time. There's been some interesting work to see if you can modulate the power used by the infrastructure you do have, so you can match it to demand rather than drawing a steady amount the entire time.


.
So, this has made it possible to basically run off‑grid connectivity on things like solar and storage. You can see, for example, at the very top there is the weather that might be taking place. In the second bar down, the telecom site power usage, you can see that when it was particularly sunny there is a spike in usage, because more energy is being produced; when it's dark you bring it back down and use this kind of 2G‑only mode, just so there's something available when there isn't that much usage. When you are in a period where there is less solar, for example, you might scale down to a slightly lower power use. You can see at the bottom that people are using a battery to smooth out the variation coming from solar: it might be that you top up batteries to make sure you have got enough to last you through the night, and so on. And then, when you have periods where you are not expecting so much power, you'll have some kind of controlled dimming to run through this. These are ideas that run in off‑grid places to provide continuous connectivity matched to the usage.
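The demand/supply matching described above boils down to a small decision rule: pick a power mode for the site from solar output and battery level. The thresholds and mode names below are invented for illustration, not taken from the Seismic project.

```python
# Sketch: choosing a power mode for an off-grid telecom site based on
# available solar power and battery charge. Thresholds are illustrative.

def pick_mode(solar_watts, battery_fraction):
    if solar_watts > 800 or battery_fraction > 0.8:
        return "full-service"   # plenty of energy: all radios on
    if battery_fraction > 0.3:
        return "reduced-power"  # scale down to a lower power mode
    return "2g-only"            # keep a minimal service available

print(pick_mode(1000, 0.5))  # full-service (sunny)
print(pick_mode(100, 0.5))   # reduced-power (cloudy, battery okay)
print(pick_mode(50, 0.1))    # 2g-only (dark, battery low)
```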


.
I think we can apply these ideas to how we run infrastructure ourselves in the rest of the Internet, for example. And I'll touch on these some examples just a little bit later on.

We have run through G, O, L. Let's look at D, for distributed. So, I think this is of interest because there is a new trend in how people are choosing to run infrastructure, or when they might choose to run it, because, depending on the conditions of the grid that you are using, the same piece of software can have either a high or a low carbon intensity ‑‑ it can be greener or not so green based on when you choose to run it.

Has anyone heard of the baking forecast here at all? All right. In the UK, there is a project where basically people who want to bake particularly green cakes or green loaves of bread have hooked into an API, so they can see when there is going to be a lot of solar and wind on the grid rather than gas. And it will basically say today is a good day to bake, or not. It's a kind of silly but quite fun example of how carbon intensity can impact things that you actually do, and if you go to Twitter you'll find the baking forecast to use yourself. It's only for the UK; I don't know if there is a baking forecast in Germany, but if you are in the UK and you want to bake a green cake, then this is an API for you.
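A baking-forecast-style decision can be sketched in a few lines: given a forecast of grid carbon intensity per hour, say whether a flexible task should run and when. The forecast numbers and the threshold are made up; this is not the real API.

```python
# Sketch: deciding when to run a flexible task from a carbon intensity
# forecast, in the spirit of the UK "baking forecast". All numbers invented.

GOOD_THRESHOLD = 150  # gCO2/kWh, illustrative cut-off for "green enough"

def good_day_to_bake(forecast_g_per_kwh):
    """True if any hour in the forecast dips below the threshold."""
    return min(forecast_g_per_kwh) < GOOD_THRESHOLD

def greenest_hour(forecast_g_per_kwh):
    """Index of the hour with the lowest carbon intensity."""
    return forecast_g_per_kwh.index(min(forecast_g_per_kwh))

forecast = [320, 280, 140, 90, 110, 260]
print(good_day_to_bake(forecast), greenest_hour(forecast))  # True 3
```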


.
You can do this in other places as well. We run a magazine called Branch and we do something like this for the magazine itself. We know the grid powering our servers is green and we have taken steps for that but we don't really know about what the grid might be like for other people in the UK.


.
So, when we know there is lots of green energy on the grid, and we know that sending data over the wire is going to be kind of green, we'll send a full fat, rich experience down the pipe to our users, because we know the energy used by the hops along the middle and the energy used by their computer will be kind of green: we'll have videos and nice images. However, if there is lots of fossil fuel on the grid, we'll scale back some of the design and make the most of the carbon budget that we do have.


.
While people use things like page weight budgets to make sure pages load quickly and offer a nice experience, you can do the same thing for carbon. We basically adapt the design to have a lighter weight, image‑free version. You can still access the same content and download a page, but we don't default to showing it. This is actually good from an accessibility point of view anyway, because if you want websites that work well for people who are maybe partially sighted, then you probably want to offer this anyway. This is a nice way to make it front and centre.
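The adaptation above is essentially a threshold rule: serve a different page variant depending on current grid intensity. The thresholds and variant names are invented to illustrate the idea, not taken from the Branch implementation.

```python
# Sketch: grid-aware design. Pick a page variant from the current carbon
# intensity of the visitor's grid. Thresholds and variants are illustrative.

def choose_variant(intensity_g_per_kwh):
    if intensity_g_per_kwh < 200:
        return "full"        # videos and rich images
    if intensity_g_per_kwh < 400:
        return "medium"      # compressed images, no video
    return "low-carbon"      # text-first, images loaded only on request

for intensity in (100, 300, 500):
    print(intensity, "->", choose_variant(intensity))
```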


.
There is something you can do, and that we want to experiment with later on: things like web workers, or tools that can see when energy will be greener and pull stuff down in the background, so that even when there are lots of fossil fuels on the grid, if you have fetched everything beforehand you are okay. This is something that Microsoft's Windows Update is actually incorporating itself: moving processes in time to go for the greenest possible decision.

This also works with data centres as well. Data centres have some of the highest energy intensity per square foot in the world. They can also respond quite quickly to changes on the grid, depending on what the grid conditions are like. Google do this.

They have a massive fleet of servers and they'll basically adapt the machines they have running and the jobs they are doing to match the conditions on the grid because, in many cases, it makes economic sense as well as being green.

And this is what I mean by examples of the grid changing and it making sense to move work around. So, this is probably an extreme example, but in Texas you had a case earlier this week where, on one side of Texas, you'd basically be paid 2,500 dollars to use a megawatt of power, but on the other side of Texas, you'd be paying three‑and‑a‑half thousand dollars to be using that power. That's partly because, on the left‑hand side, there is loads and loads of wind, because it's a particularly windy day, and they need to get rid of it because, if you have too much, it's basically damaging to the grid, so it makes sense to just pay people to use it rather than turn things down. On the other side, maybe it's really, really hot, you have got loads of fossil fuel generation, loads of people turning on AC, and you have got a demand crunch with people struggling to meet that demand, so the price goes high. Most of us are not exposed to this, but these are the kinds of trends we'll see more of, and there are organisations that take advantage of this now.
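The "move work around" logic here is just picking the region with the lowest (possibly negative) power price. Prices below are illustrative, loosely echoing the Texas example above; region names are invented.

```python
# Sketch: placing a movable workload in the region with the cheapest power.
# A negative price means you are paid to consume. All figures invented.

prices_usd_per_mwh = {
    "texas-west": -2500,  # too much wind: consumers get paid
    "texas-east": 3500,   # demand crunch: power is very expensive
    "elsewhere": 40,      # a normal day on a normal grid
}

def cheapest_region(prices):
    """Return the region where running the workload costs the least."""
    return min(prices, key=prices.get)

print(cheapest_region(prices_usd_per_mwh))  # texas-west
```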


.
So, that's this idea of distributed, for example. And if you are a network engineer, you realise that there are limits on the speed of light. You can only move things around so fast. Just the same thing with energy. There are literally issues with transmission, just the same way we have issues with connectivity.

And when you think about this at a network level, there is some interesting work from path‑based networking tools that have basically started to become more prominent. So, let's say I am in the UK and I want to access a website in, say, Poland. If I'm going to do this I need to hop through every country, or most countries, to get to it, and every time the carbon intensity will be different as I run through this. So France is going to be low carbon, but Poland is high, and if it's Germany and it's not a particularly windy day, for example, we might be using a lot of gas, which means there is going to be a footprint there.

I can use things like a CDN to serve most of my content from somewhere closer, and that's going to have some impact, but I still need to be getting some stuff through those middle areas. If I had some idea of path awareness, then I could do something like a kind of low carbon trick shot, sending it around the greenest part of the Internet to actually get to, say, Poland, and this is one thing that you often see with the north of Europe really; you can do this kind of stuff. And these ideas are basically being implemented in a protocol called SCION, which is probably the best example I can point to of this.

This is a chart from a recent paper showing this. You maybe get half the energy usage compared to BGP. This is obviously relying on the entire Internet using it instead of BGP, but it gives an idea that there are possible savings there.
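The "low carbon trick shot" is a shortest-path problem with carbon as the edge weight instead of latency. Here is a minimal Dijkstra sketch over an invented graph of per-hop carbon weights; real path-aware routing such as SCION works differently in detail, but the path-selection idea is the same.

```python
# Sketch: route over the path with the lowest total carbon, not the fewest
# hops. Dijkstra with per-hop carbon weights. Graph and weights are invented.

import heapq

def lowest_carbon_path(graph, src, dst):
    """Return (total carbon, path) for the greenest route from src to dst."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == dst:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, carbon in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (total + carbon, nxt, path + [nxt]))
    return None

# Illustrative per-hop carbon weights between countries.
graph = {
    "UK": {"FR": 60, "DE": 350},
    "FR": {"DE": 350, "PL": 650},
    "DE": {"PL": 650},
    "PL": {},
}
print(lowest_carbon_path(graph, "UK", "PL"))  # (710, ['UK', 'FR', 'PL'])
```

Routing through France wins even though the direct UK to DE to PL route has the same hop count, because the French hop is far lower carbon.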

That's gold ‑ green, open, lean and distributed.

I'm going to make the case for a fossil free Internet. Maybe you care about this stuff, but it might not be the argument you lead with in an organisation, because they might not have the same priorities you have. The thing you can do is make the argument that it's so much cheaper now. In the last two years, you have seen lots of organisations doing this stuff because the cost of green energy has come down so much, and that's especially the case in the last, say, six months. You can see how the cost of green energy ‑‑ this chart is basically showing a percentage ‑‑ so compared to ten years ago, the energy costs about 10% of what it did. And if you look at where we are right now, the cost of coal and gas is multiples higher. There is a good argument that this is something cheaper in that sense.


.
And I think that's all the stuff I have time for. If you are interested in any of this, the organisation that I work for provides training and consultancy, and if you have any kind of green services that you provide, contact us so we can get them listed when people ask for this.

If you care about this, there is a community called ClimateAction.tech, which is very, very well known ‑‑ which is really, really nice and supportive and these are the ways to reach me.

That's me, folks. Thanks.

(Applause)

PETER HESSLER: Thank you. We are running a little bit tight on time so I'm going to cut the Q&A queue right now, but first, Robert, if you would like to go to the mic. Okay, there is not a Robert, so Gordon, please come up.

GORDON: Thank you for the talk, very interesting.

PETER HESSLER: Give your name.

SPEAKER: Gordon. So, thank you for the talk, first of all. I am quite interested in ‑‑ you mentioned that some companies couple renewable power generation capabilities with data centres, and sometimes these power generation capabilities might exceed the consumption of the data centres. I am wondering if this is becoming sort of a new norm: should we expect to see more renewable power generation capacity installed next to data centres, possibly even exceeding the data centres' needs, turning them effectively into power plants with compute capacity?



CHRIS ADAMS: Yes, absolutely, this is totally a thing that's happening more and more, and it makes economic sense a lot of the time, because you can basically get paid to turn off data centres when there is a lot of demand on the grid, if you have a particularly heavy load, for example. And if you are able to then provide those kinds of energy services to other people, that's a whole separate revenue stream. There is a term, a controllable load resource, for this kind of stuff. That's absolutely a thing that people do now. And data centres are probably particularly good off‑takers for things like this and make it easier for them to get built; in many cases it saves you money as well, rather than buying gas and helping fund a war machine, for example.

SPEAKER: Blake. So thanks again for this, this is great. People like myself who are part of a sustainability Working Group within their company really appreciate having slides like this to use as kind of ammunition for this sort of thing.


.
On that note, we recently kind of rebooted the company and part of that was our investors actually insisted that we create a sustainability Working Group within the company. So this is a thing that can be done like change can happen, even within large organisations like ours, so thanks.



PETER HESSLER: Okay. Thank you, Chris.

CHRIS ADAMS: Well, folks, I'll be around for the rest of the day, and I'm in these places too, if you want to e‑mail me or anything like that. Have a lovely day. I'll hand over to the next person, I suppose.


.
(Applause)

Next we have Carsten Strotmann who will be talking to us about fragmentation in DNS and protection against DNS cache poisoning.

CARSTEN STROTMANN: It's time to talk about DNS again.

So, I am talking about a study that my colleagues Roland, Markus DeBrun, Anders Kolligan and I did over the last two years, in which we did some research into the question of whether IP fragmentation is dangerous for the DNS as we know it today. We wanted to know: is IP fragmentation a real threat in the Internet? Are mitigations possible, and how much do these mitigations cost?

First, what we are talking about.

So, it's possible, and it has been known for quite some time, that attackers can force an authoritative server to send out DNS answers fragmented. That is done by lowering the path MTU with spoofed messages sent to the authoritative server, so that, later on, when a DNS resolver sends a query to that authoritative server, the server needs to fragment the response going back.

The attacker knows about that and can then send spoofed second fragments of the IP packets towards the DNS resolver, which then happily combines the first fragment with the spoofed second fragment.
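To see why the spoofed second fragment works, it helps to spell out the arithmetic of IPv4 fragmentation. The following is an illustrative sketch in plain Python (the sizes are chosen for the example, not taken from the study): every fragment except the last must carry a multiple of eight payload bytes, and only the first fragment contains the UDP header and the DNS transaction ID.

```python
def fragment_payload_sizes(total_payload: int, mtu: int, ip_header: int = 20):
    """Split an IP payload (UDP header + DNS message) into IPv4 fragment
    payload sizes for a given path MTU. Every fragment except the last
    must carry a multiple of eight payload bytes."""
    per_frag = (mtu - ip_header) // 8 * 8
    sizes = []
    remaining = total_payload
    while remaining > per_frag:
        sizes.append(per_frag)
        remaining -= per_frag
    sizes.append(remaining)
    return sizes

# A 1,400-byte UDP datagram (8-byte UDP header + 1,392-byte DNS message)
# after the attacker has lowered the path MTU to 576 bytes:
print(fragment_payload_sizes(1400, 576))   # [552, 552, 296]

# Only the first fragment carries the UDP ports and the 12-byte DNS header
# with the transaction ID; later fragments are matched purely on IP-level
# fields (addresses, protocol, IP ID), which an off-path attacker can
# guess or spoof.
```

This is why the identifying entropy of classic DNS (port and transaction ID) offers no protection once a response fragments: the spoofed second fragment never has to match it.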


.
Now, in DNS without DNSSEC, everything that identifies a DNS response is in the upper part of the DNS message, which ends up in the first fragment, not in the second.

However, some juicy information that attackers might want to spoof is in the second part, for example the IP addresses and names of name servers in a DNS referral.


.
So, that false response is then sent back to the client, and it is also stored in the cache, so that other clients will also get the wrong answer back.


.
So, first, we looked into how common IP fragmentation is for DNS responses. For that, we looked for 24 hours at the DNS responses at a large European Internet service provider that has about 4 million home and business Internet users. And we saw quite low numbers of fragmented responses: low in percentage, but still high in absolute terms. For IPv4, 0.10%, and, for IPv6, 0.11% of all the DNS responses that we saw were fragmented.

That's still for IPv6, 69 million responses. So, it's low in percentage because the Internet is big. It's still a high number of responses.


.
And a lot of these responses were caused by DNSSEC. Now, we could say, okay, if it is caused by DNSSEC, then we are good, because DNSSEC prevents these kinds of spoofing: the resolver will figure out that the data is spoofed and will not forward it to the client. That is true if the DNS resolver is a DNSSEC-validating resolver, but unfortunately not all the resolvers in the world are validating. So there is still a danger for the roughly 50% of resolvers in the Internet that are not validating.


.
This shows the amount of fragmentation we have seen over 24 hours. We see that it starts somewhere in the early morning and goes down in the evening, which is a little bit strange, because the most traffic at this ISP is seen in the evening, and this is the percentage of fragmentation we see; the most fragmentation really happens in the early morning hours.
So this might be because of caching: certain DNS responses that are prone to be fragmented fall out of the caches during the night time and are then refreshed or refetched in the early morning.


.
Then we looked into which DNS servers are responsible for sending IP fragmented DNS responses, and we found that for IPv4, it's just 37 DNS servers that are responsible for 90% of all the fragmentation we have seen. And for IPv6, it was 133 name servers. So that is a low number if we compare that with the total number of responses we have seen.


.
And if that is a low number, the question is: maybe these are domains that we don't care about, maybe these are not popular domains. So we looked into the domains being served by the name servers that we saw sending fragmented responses, and we found some really interesting domains in there which we can say are quite popular, like office.com or army.mil.

So, that was the first piece of research we did, and then we looked at the authoritative side. We used the OpenINTEL platform, which is a large research platform that covers around 60% of the public Internet, and we used it to send DNS queries for all of the domain names that are queried by OpenINTEL and looked for fragmentation in the responses; in particular, we looked for fragmentation in the AAAA record, A record and NS record responses. And again, in percentage terms we saw very low fragmentation, but in total numbers it's still high.


.
So, that was 0.047% of all the answers fragmented for IPv4, and 0.096% of responses fragmented for IPv6, so low numbers, but it's still in the millions if you count the packets.


.
We looked into the distribution of the sizes of the DNS datagrams sent in response. This is the picture for IPv6. And as we see here, the large majority of all responses are below the Internet MTU, and most responses are even below 1232 bytes, which is the size that we can say is guaranteed never to fragment in the normal Internet, with the normal MTUs of IPv6 or IPv4.
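The 1232-byte figure isn't arbitrary: it falls out of the IPv6 minimum link MTU. A one-line derivation:

```python
IPV6_MIN_MTU = 1280   # every IPv6 link must support at least this (RFC 8200)
IPV6_HEADER = 40      # fixed IPv6 header
UDP_HEADER = 8

# largest DNS payload that can never fragment on a standards-compliant path:
safe_dns_payload = IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER
print(safe_dns_payload)   # 1232
```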

And this is the picture for IPv4, which is very similar. We also looked into the EDNS information that the authoritative servers advertise to the outside world, meaning how large the responses are that the authoritative servers are willing to send back, and the good news here is that about 50% of all authoritative servers had already tweaked their configuration. Now, I have to say that this measurement was done in early 2020; that was before the DNS flag day and before the open source and some of the commercial DNS vendors changed their settings for the EDNS buffer size.

So, even before that, around 50% of all operators had changed their configuration to a safer one.


.
Next, if the EDNS buffer size setting is low, it will create more TCP traffic, because responses that don't fit into 1232 bytes, or 1,500 bytes, or whatever the configuration is, have to be queried over TCP again. The question then is how many of the authoritative servers in the Internet support TCP today. We used the OpenINTEL system again for that, and we tested how many of the authoritative servers really support TCP.


.
And we found out that 90% of all the domains that are in the OpenINTEL dataset, they support TCP on all their DNS servers.
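Probing for DNS-over-TCP support mostly comes down to RFC 1035's framing rule: over TCP, each DNS message is prefixed with a two-byte length. A minimal stdlib Python sketch of that wire format follows; the query builder is simplified and illustrative (no EDNS), and a real probe would additionally connect to port 53, send the framed query and read back a length-prefixed answer:

```python
import struct

def build_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Minimal DNS query message: 12-byte header plus one question."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD bit set
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # class IN
    return header + question

def tcp_frame(msg: bytes) -> bytes:
    """DNS over TCP prefixes each message with a two-byte length (RFC 1035)."""
    return struct.pack(">H", len(msg)) + msg

q = build_query("example.com")
framed = tcp_frame(q)
# the first two bytes carry the message length:
assert struct.unpack(">H", framed[:2])[0] == len(q)
```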


.
However, roughly 2.4% of the domains have no DNS servers supporting TCP, and that rules out TCP-only as a mitigation for the spoofing problem and the IP fragmentation problem, because it would cut off these 2.4% of domains in the Internet.


.

Are these important domains? We looked into this, and we found that, at least during the measurement, live.com, office.com and yahoo.com had some of their authoritative servers not responding over TCP. 1.5% of all the domains in the Tranco 1 million list, the list of the 1 million most popular Internet domains, have no server with TCP support. And this is a plot that shows where these servers are. We can say that in the first ten thousand or so domains, the picture looks good; they have TCP support. But in the remaining part of the 1 million, it's a mixed bag; the domains that don't support TCP are evenly distributed in there.


.
The conclusion is that there are few, but still popular, domains that don't support DNS over TCP, so using DNS over TCP as the only mitigation for fragmentation cannot be recommended.


.
Next, we looked into which operating systems are vulnerable to ICMP spoofing. Remember, the attacker needs to lower the path MTU towards the authoritative server in order to make fragmentation more likely. So we tested operating systems to see whether they can be attacked with spoofed ICMP error messages.


.
For that, we created a test setup: we sent spoofed ICMP error messages, and then we sent a DNS request to the authoritative server for a resource record that we had planted there, with a response just below 1,500 bytes, which is the Internet MTU, and we checked whether it came back fragmented. We found that the Windows operating systems and the BSD operating systems are not vulnerable to spoofed ICMP error messages; it was not possible to lower the path MTU for IPv4 there. But older Linux kernels were vulnerable, in that we were able to lower the path MTU to roughly 552 bytes, and that creates the possibility of IP fragmentation attacks there.
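For readers who want to see what such a spoofed message looks like on the wire, here is an illustrative sketch of the ICMP "fragmentation needed" format (type 3, code 4), where the attacker-controlled next-hop MTU field lives (RFC 1191). This only builds the bytes; actually transmitting spoofed packets requires raw sockets and belongs in a lab testbed like the one described here, not on the open Internet:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet (ones'-complement) checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack(">%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def frag_needed(next_hop_mtu: int, original_datagram: bytes) -> bytes:
    """ICMP type 3 (destination unreachable), code 4 (fragmentation
    needed): the formerly unused field carries the next-hop MTU, and the
    payload echoes the IP header + first 8 bytes of the 'failed' packet."""
    header = struct.pack(">BBHHH", 3, 4, 0, 0, next_hop_mtu)
    csum = icmp_checksum(header + original_datagram)
    return struct.pack(">BBHHH", 3, 4, csum, 0, next_hop_mtu) + original_datagram

# Claim a next-hop MTU of 552 bytes (the value the study could force
# on old Linux kernels); the echoed datagram here is a dummy 28 bytes.
msg = frag_needed(552, b"\x45\x00" + b"\x00" * 26)
assert struct.unpack(">H", msg[6:8])[0] == 552
```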


.
And these older Linux systems are, for example, Ubuntu 14.04 and 16.04, the LTS versions, and RedHat Enterprise Linux 6 and older, which we still see some of in the Internet. The question was how many of these older Linux systems are still being used in the Internet as DNS servers, and this is the point where our research departs a little from the strictly scientific approach, because we used a query for the version.bind name in the CHAOS class to figure out which operating systems are used. With this query we only see the BIND name servers, and only the name servers where administrators haven't changed the version string, but, still, this gives a lower bound on the operating systems we see, because the popular Linux distributions encode the version number of the operating system in the version string. So you get back something like BIND 9.11 with an el6 or el7 tag in the version string, and that tells you this is a BIND 9.11 running on RedHat Enterprise Linux 6 or 7.

With that, we could figure out how many old Linux distributions are still used for authoritative DNS servers in the Internet, and we found that, at least among the BIND name servers we have seen, RedHat seems to be more popular than other Linux distributions. We saw that 28.2% of all the servers we queried were running some kind of RedHat Linux, and 11.5% were still on RedHat Enterprise Linux 6, which has one of the vulnerable kernels. There was 14% on EL7 and 0.2% on the, then new, Enterprise Linux 8. There was less Ubuntu: 2.9% of the servers ran Ubuntu, and roughly 1.5%, which is Ubuntu 14.04 and 16.04 combined, the vulnerable versions, could be found in the Internet.
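The version-string trick can be sketched as follows; the version string below is made up but typical of what a default RedHat BIND package returns to a `version.bind CH TXT` query:

```python
import re

# Hypothetical but typical answer to:
#   dig @server version.bind CH TXT +short
version = "9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.16"

# the ".elN" tag leaks the RedHat Enterprise Linux release:
match = re.search(r"\.el(\d+)", version)
print(match.group(0))   # .el7
```

This is also a good argument for overriding the version string in your own name server configuration, since it leaks the underlying OS release to anyone who asks.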

So that is still some 15% of machines in the Internet, authoritative servers in the Internet that were potentially vulnerable for these kinds of attacks.


.
And that raises the question of whether it is wise, from a security point of view, to run long-term support Linux systems in the Internet, because this particular problem is not something that was published and patched as a security issue; it is something that was changed in later Linux kernels for reasons unrelated to the security problem. But still, the combination of running DNS software on old kernels creates a security problem.


.
Then we looked into mitigations: what kinds of mitigations are available, what is their impact on performance, and what is the current support in popular DNS server software.

Mitigations which are already implemented and widely distributed in server software we ranked higher than mitigations that have to be implemented first and then rolled out in the Internet, because we know it can take years for a new feature in DNS software to be rolled out in the Internet.


.
In order to test this, we built a small-scale model of the DNS part of the Internet, which we have done in the past to do proper scaling of resolver infrastructures for large ISPs. So we know that the performance numbers we get from the testbed are comparable to the performance numbers we get in the real world, because we have done this kind of performance testing for large ISPs over the last ten years and have been able to compare the numbers from the testbed with the numbers in the real situation later.


.
For that, we built an array of authoritative servers hosting a root zone, top-level domains, roughly the same number of top-level domains that we have in the Internet, and then second-level domains, and we created a mix of DNSSEC-signed and non-DNSSEC-signed zones comparable to those in the Internet. We had a large DNS resolver in the middle, where we tested the mitigations, and then we had another array of clients, which needs to be much larger than the authoritative servers and the resolver to really saturate the resolver and find out when this stuff breaks.

These are the mitigations that we tested. We had already ruled out TCP-only DNS service, because there are too many DNS servers out there that don't support TCP, but we still wanted to know what the performance impact is if we run DNS just over TCP.

Then we were also interested in DNS over TLS, DoT, with TLS 1.3 being highly optimised: what is the gap between pure TCP and having TCP plus TLS in there?
.
Then we implemented something that we call opportunistic TCP. That was implemented together with the good guys at NLnet Labs, and opportunistic TCP means that we first try to resolve a query over TCP and, if we reach an authoritative server that doesn't support TCP, we fall back to UDP. The idea was that the 97% of servers that support TCP are protected against these kinds of attacks, and only the remainder, the 2.7%, are still open to attacks, which is a better situation than today, where all the machines are possibly attackable.


.
Then we looked at using UDP only for the small responses, that is, responses smaller than the regular MTU that we see in Internet networks. We also tested what happens if we just throw away all fragmented packets, all fragmented answers coming back to a resolver, at the firewall level. And what happens if we only throw away small fragments, ones that cannot, or normally do not, occur in Internet networks, so fragments of everything that is below, like, 1,500 bytes.
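As an illustration of the "drop fragments at the firewall" mitigation, something like the following nftables rules could be used on a resolver host. This is a hedged sketch, not a tested production rule set: the table and chain names are made up, the exact syntax varies between nftables versions, and in practice you would scope the rules to DNS traffic and your own interfaces rather than dropping all fragments globally:

```
table inet dnsguard {
    chain input {
        type filter hook input priority -10; policy accept;
        # IPv4: non-first fragments have a non-zero fragment offset,
        # and first fragments of a fragmented packet have the MF bit set
        ip frag-off & 0x1fff != 0 drop
        ip frag-off & 0x2000 != 0 drop
        # IPv6: any packet carrying a fragment extension header
        exthdr frag exists drop
    }
}
```

As the talk notes, the resolvers tested here cope with this by falling back to smaller EDNS buffer sizes or TCP, so dropped fragments do not strand the query.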

And the last one was something that was suggested by Mark Andrews of ISC, that is, signing all the DNS responses with a well-known TSIG key, because that creates some random material, a signature, at the end of the DNS message, which is always in the last fragment, and that is much harder to spoof and to guess from the attacker's point of view, so it can also mitigate these kinds of attacks.


.
And here are the performance numbers that we found. For running DNS over TCP, we found a performance loss of 44%, if I see that correctly. And with DNS over TCP and then TLS on top, we see that TLS is quite optimised; it's just another 4% more loss, so minus 48%. But still, minus 48% for DNS is not acceptable. We want to have our DNS quick and fast and lean.

Then the opportunistic TCP was even worse. That was because, if an authoritative server didn't support TCP, we had to wait for a time-out and then re-query over UDP, which resulted in a performance loss of 74% compared to classic DNS over UDP.


.
Then, on the positive side, if we use UDP just for the small responses, meaning responses which normally don't fragment in Internet networks, we had a slight performance increase, not much, roughly 5%, and that might be because the DNS servers, the authoritative servers and also the resolvers, have less data to process: authoritative servers, if they have a restriction on how much data they can send over UDP, put less data in, for example, the additional section. And that is maybe less work for the servers, resulting in this little speed-up.


.
If we throw away fragmented responses at the firewall level, we also see a performance increase, just a little bit, 4.7%, but still good. And what we found out is, if we throw away the data, the popular DNS resolvers that we used in our testbed, which were Unbound, BIND, PowerDNS Recursor, Microsoft DNS and Knot Resolver, remember that a certain server is not reachable with the normal EDNS setting, and they fall back to classic DNS over UDP with an EDNS buffer size of 512, so a maximum answer size of 512 bytes. That results in even less data to process, hence the speed-up.


.
And the few TCP connections that resulted from that didn't have a large impact on the performance here.


.
Dropping small fragments only gave an even higher speed-up of 5.8%. And using TSIG, there was a 0.5% speed-up, but that was just measurement noise; it seems that if we use TSIG to secure DNS between the resolvers and the authoritative servers, it's the same as classic DNS over UDP, so no speed-up and no losses.


.
So our recommendation from this test is to use DNS over UDP for the small responses only, meaning for responses that are guaranteed not to fragment. If you do that, you can safely drop all the fragmented answers at your firewall level, because there is no legitimate reason why there should be fragmented answers anyway with this setting. We communicated that to the people who planned the DNS flag day, and it was also a confirmation of what people in the DNS community had already known for quite some time: that it's a good idea to limit DNS responses to 1232 bytes, or similar values, which was the recommendation of DNS flag day 2020.
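In current resolver software this recommendation is a configuration one-liner. Hedged examples follow, using the option names as documented for Unbound and BIND 9; check your own version's defaults, since releases after DNS flag day 2020 already ship with 1232:

```
# Unbound (unbound.conf):
server:
    edns-buffer-size: 1232

# BIND 9 (named.conf, inside the options {} block):
#     edns-udp-size 1232;
#     max-udp-size 1232;
```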


.
So conclusions here:
.
It's possible to attack DNS content with IP fragmentation.

The amount of natural fragmentation is minimal but still significant.
.
Even popular domains are vulnerable; it's not just small home pages, it's large corporations that are vulnerable. And fragmentation should be avoided and can be avoided. The mitigation is already built into every DNS server out there; it's a matter of creating the correct configuration or updating to the latest version, which has the new defaults already built in.


.
And of course, these mitigations that we looked into only work against the effects and do not remove the cause, meaning you should really deploy DNSSEC, because if you have DNSSEC deployed on your authoritative zones and also on your DNS resolvers, this cannot happen anyway, and you don't have to fear any of these spoofing attacks.

And that concludes my little presentation here and I am happy to take feedback and questions.


.
(Applause)

PETER HESSLER: So just as a reminder, we do have a text Q&A if you would like to submit your questions via text, also very useful for those of our remote participants, you can use your phones to add yourself to the virtual queue, as currently nobody has done, or you can come up to the microphone as you are. So, first question.

SPEAKER: I was about to add myself to the queue, but thank you. Jelte Jansen, SIDN, I have ‑‑ first let me thank you for this comprehensive and extensive work, it's really great stuff. I have a question about one of the first slides, the vulnerable resolvers.
.
You mentioned that it's much much worse for DNSSEC signed zones, but would resolvers that do not verify DNSSEC also not ask for DNSSEC records in their responses or are there so many that do ask for them but don't validate?

CARSTEN STROTMANN: My knowledge, and please correct me, everyone here, if I'm wrong, is that the popular DNS resolvers, even when DNSSEC validation is not enabled, still ask for the DNSSEC data by setting the DO flag, so they get the DNSSEC data, the signatures, everything, but then they don't do anything with it. And that creates this vulnerability for them.
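The DO ("DNSSEC OK") bit lives in the EDNS0 OPT pseudo-record that resolvers attach to their queries. An illustrative stdlib sketch of that record's wire format (RFC 6891 for OPT, RFC 3225 for DO; the helper name is made up):

```python
import struct

def opt_record(udp_size: int = 1232, do_bit: bool = True) -> bytes:
    """EDNS0 OPT pseudo-record: root name, type 41, the 'class' field
    carries the requestor's UDP payload size, and the DO bit is the top
    bit of the 16 flag bits stored in the TTL field."""
    flags = 0x8000 if do_bit else 0x0000
    return (b"\x00"                             # root domain name
            + struct.pack(">H", 41)             # type OPT
            + struct.pack(">H", udp_size)       # advertised UDP buffer size
            + struct.pack(">BBH", 0, 0, flags)  # ext-rcode, version, DO|Z
            + struct.pack(">H", 0))             # RDLEN 0: no options

opt = opt_record()
assert struct.unpack(">H", opt[3:5])[0] == 1232  # buffer size advertised
assert opt[5:9] == b"\x00\x00\x80\x00"           # DO bit set
```

This is exactly why a non-validating resolver still receives the signatures: it advertises DO, gets the full DNSSEC payload, and then ignores it.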

SPEAKER: That's how many can be configured. I'm not sure how many are, so that's why I ask. So enabling DNSSEC is always the answer? Okay. Thank you.

PETER HESSLER: Thank you. Next, Jan will be reading a question from the text Q&A.

JAN ZORZ: There is a question from Kurt ‑‑ sorry, I can't enter the queue, I have no means to enter the queue. Sorry, Kurt Kayser, private interest: Would DNS fragments filtering break anything? My hope would be that it does not impair DNS operations, hence can be endorsed?

CARSTEN STROTMANN: It depends really on the DNS resolver software whether it breaks something or not. We have tested with the popular open source resolvers and also with the Microsoft resolver, and there it worked, in that the DNS resolver products always had a quick way to fall back, for example to a DNS query without EDNS, falling back to the traditional 1983 DNS, which has the limit of 512 bytes.
.
There might be products out there that fail. But we have operated this set-up, throwing away fragments at the firewall, at a large ISP in Europe for three years now, and we have not seen any negative effects from doing so. So, from my point of view, it's safe to throw away the fragments.

SPEAKER: Internet Systems Consortium, DNS developer. First, thank you for doing the analysis and I have to say it's maybe too detailed because we should be spending energy on deploying DNSSEC not on analysing, you know, the stuff we know it's insecure from '85 till today. So please deploy DNSSEC to your zones and don't spend more time on measures, because even if you somehow duct tape this fragmenting problem, it will still be, you know, open to other types of attacks because it's insecure by design, so please deploy DNSSEC.

CARSTEN STROTMANN: Yes. I second that.
.
(Applause)

PETER HESSLER: Okay. I see there are no more questions.

CARSTEN STROTMANN: Also, just one remark.
.
There is a comprehensive report about 150 pages on this research that will be published by the German cybersecurity office, the BSI, so if you are interested in more details, either watch the web page of BSI in May, as it should be available there soon, or if you cannot find it there, write me an e‑mail, you'll find me here or on the Internet, my e‑mail address, and I will give you the paper or the link to the paper as soon as it's published.

SPEAKER: May I have another one? Just another clarification question about the performance measurements. This is the performance of resolution time, right, not performance on the authoritative side.

CARSTEN STROTMANN: Yes, performance of name resolution as seen from the client.

SPEAKER: Because I would be interested in what that would do to big zones servers like .nl.

CARSTEN STROTMANN: Yes, good research question for next time.

SPEAKER: Let's talk later.

PETER HESSLER: Okay. Thank you very much.
.
(Applause)
.
As Dimitry is coming up to give his presentation, I'd like to remind you that ‑‑

DIMITRY KOHMANYUK: Hi, guys. It was a bit of a dramatic title; I tried to change things a bit, and I'm glad to see you all. Sorry, my eyes can't catch both sides of the room, so I'll pretend that I'm looking at either.

We have done ‑‑ I have just been looking at my own notes and let me put them here.

So, as I said, we have done this before and I'm glad to see you all. I just tried to give you some ideas how you can operate things when, let's say, unforeseen circumstances happen.

So, just the background. We are running the top-level domain of Ukraine. I was at a few meetings before, so maybe you have seen me; we are the top-level domain operator in Ukraine, and we are a small company. We have about 20 staff, and I am the CTO; I think my title is director of strategy.

We had done some things that, in a way, prepared us for what was coming. We had a denial-of-service attack the week before, and one of the things that went down was the government domain. It's kind of unfortunate that things like that happen when you are not prepared for them. We learned a few things that proved critical. One of them is having team communication that doesn't run on your own infrastructure: we switched to Signal chats, and I was using less and less e‑mail and more and more hosted online tools like Google Docs. It's kind of a strange feeling: you are used to having your own infrastructure and relying on it, but when it's under attack and you don't have a separate thing like the cloud, which we should have had the foresight to set up, it's good to have other options, and it takes away the worry of fixing your own tools so you can focus on the services that are provided to the public.


.
What happened next is harder to describe. I was abroad when the active military action, the attack, whatever we call it, started. I woke up pretty early in the day without knowing what was going on, and I was really in a panic, like, I guess, most of our team; I didn't know what to do. The news was pretty grim, and I tried to reach out to everybody in the company to make sure they were okay. I understood that the worst could happen: we might lose some of our staff, like, really, and some of our infrastructure, and I had no way to predict what was going to happen next. The expectation of the world was that Ukraine was going to surrender in 72 hours and just give up.

Well, obviously the military had their own ideas. I made a list of things; my main tool was Evernote, and I was doing my work from my MacBook and phone, as I was on a trip without my home setup. I decided the 80:20 rule was going to be: we are going to save the most important things, and the rest is going to disappear.


.
I created the plan to migrate most of our infrastructure abroad in 72 hours. I think we overran the time a bit, but I would say, by the end of the Sunday, which was the fourth day, we have done that.

So I put these four things that I think as every manager ‑ I can call myself a manager now ‑ should keep in mind.

You have to prioritise your staff. And also, maybe, customers, but the whole thing we have here does not exist by itself; you know, the goal of the company is to provide for society. If you don't care for your own people, then it's meaningless. The data, of course, is important; like, a database is more important than the client, right. And the data means also your financial data, your production data, whatever you call it, but you have to structure things so that you are prepared to lose the things at the bottom of the list, right. Money is something that you may lose, of course; I'm talking income, resources, you may overspend, you may over-allocate. You kind of do speculative execution: you are doing projects which you are aware will partially fail. For example, I reached out to maybe 20 potential partners to help us, and I ended up with the three main ones. For DDoS protection services, we used Netnod DNS and we are still using them; we later added 6connect, and I am grateful to the folks present here, Cloudflare as well. We used CZ.NIC as our main hosting provider, and then we had a not-to-be-named company from Ukraine that was helping us, primarily before the war, and they had services abroad.

We had assembled ‑‑ well, that was covered in other presentations, so I'm not going to tell you how to run your own TLD; if you are not running one, it's not your business, but we had a certain set of things we had to take care of, and those are listed here. It was interesting to think about which data centres, or which servers, could go down as Russia's military advance progressed. Well, it was not that complicated, but, as I said, we were planning to move things abroad, and moving things means that you have to establish accounts, create new servers, create the virtual machines, move staff. And some of your people are actually in a car, moving away from the zone where the active shooting happened; one of our developers was out of connectivity for 24 hours. She was calling me every few hours, I was calling her and making sure she was okay. Then she was like, okay, I am going to shut down production and quickly migrate the primary, and then nothing happens for the next five hours. I am like, okay, we have these secondary servers, okay.


.
So it was kind of messy, difficult, stressful. I am still recovering. It was fine when it was partially done, then a lot of loose ends.

Some decisions we made early on were reinforced: what we do ourselves and what we don't do ourselves. I maybe haven't mentioned it before, but we are very BSD-centric. I made the sacrifice of migrating to Linux for the clusters. I'm still regretting that, but I think it was a good decision to have one main platform; simplifying infrastructure and making sure you have fewer components, more standardisation, is very helpful.


.
Some things obviously rely on the vendors. We are not, for example, using Amazon that much but, when we do, we must use it the way Amazon intends. For example, I could complain a lot about Amazon's virtual networking: importing your own network blocks into Amazon is, I wouldn't say a nightmare, but way more difficult. I need to talk to the people who designed VPC and tell them they're wrong. No v6 by default unless you enable it. A lot of other v4 dependencies.

These are probably not all the costs, and the word "costs" means not just money; it means everything at the end of your allocation. You are thinking of time versus people, goals versus resources: what are you going to do?
.
One lesson that I want to emphasise from here is that you have to build contacts with people before you need to work with them deeply. Okay, you probably can't get Amazon to be your close friend, but my contacts with, say, the IANA team and the Cloudflare management team were helpful. Not just my own contacts: other people in my team's contacts, people I knew, helped me reach people I didn't know. We got a lot of free help, and I already mentioned those companies, so I'm going to repeat their names later.


.
It's kind of too late, when you are in this situation, to establish new friendships; you know, just setting up accounts can take a couple of days, and you have to do things within a 24-hour cycle. You have a daily loop and you have two or three stand-up meetings per day.


.
Those are the people we are mostly relying on, or were relying on, and I already mentioned some of them. Packet Clearing House was an old partner of ours, but they also helped us do some other things which I'm not going to name here. Nothing secret, I just want to keep this talk short.

A lot of companies that we offered to pay said: don't even think about it. Some of the companies we have signed memoranda of understanding with, and that includes CZ.NIC, and that includes ‑‑ well, let's go on, right. Sorry, I kind of got off track.

Having things in writing, not necessarily as in on paper, is somehow more important than just the warm, fuzzy feeling of negotiation, and it goes kind of contrary to the way of using instant messengers. So as much as I recommend not relying on word of mouth and having everything in writing, sometimes you have to trust things to be done correctly before they are done.

I guess there is no right way to do it, but I tried to fill in the missing gaps afterwards.

A few bits here: we were updating the root zone entry for .UA, and the people who do those checks ran into their own issues, so I had to escalate a ticket and message Kim Davies, whom I happened to know, to push things forward on a Sunday, and that was actually done. That's the thing that is hard about working with people in the opposite time zone: US versus Europe, or, let's say, Asia versus Europe, or US versus Asia. So, yeah, maybe that's a good tip: try to concentrate on companies that operate in your own time zone or close to it. Most companies do, but not everyone.


.
I would like to also express my extra gratitude to people in this community and of course other communities: the Minister of Foreign Affairs of Sweden, I hope I say this correctly, the Global NOG Alliance, partially present here, and other folks who gave me private advice. I am probably going to write a longer version of this talk as a kind of document to be published on our company site.

My greetings go to the armed forces who actually protect my country and, well, make it possible at all for my own team to survive.


.
I can take some questions, if we have time? About ten minutes, am I correct?

PETER HESSLER: We have plenty of time. So if there are any questions for Dmitry?

DIMITRY KOHMANYUK: I am sorry for stealing the mic off that table.

(Applause)

PETER HESSLER: It looks like we have one person coming to the mic. If other people have questions ‑‑

SPEAKER: Hi, Tom Hill from British Telecom. I don't so much have a question as I think I just want to highlight how incredible this has been, and the work that you have done is probably one of the most amazing examples of Internet architectural resilience that we have ever seen. So, you have already said thank you to everyone, but thank you to you in particular.
.
(Applause)

DIMITRY KOHMANYUK: I appreciate your words. I should mention we aren't an ISP, but we did rely on the Ukrainian ISPs, and yes, those guys were really resilient on their own. I hope they will come some day and speak about their role and their actions.

JIM REID: Speaking for myself. Dmitry, I think you and your colleagues have done a wonderful job. I cannot imagine the stresses and hassles you have had to cope with over the last few weeks, and it's very impressive the way you have handled this, so well done!
.
One thing that I would like to move on to from this a little bit: even though this situation is unprecedented, and what's going on in Ukraine right now is unthinkable, it does highlight that this business of running core Internet infrastructure is becoming much more complicated and much more visible. We all need to think much more about business continuity and disaster recovery. Even if the circumstances are never going to be as extreme as what's happening in Ukraine right now, I think everyone running Internet exchanges, Anycast instances and so on needs to start thinking about what happens if something really bad happens, and the really bad thing isn't some idiot with a JCB digging up a bit of cable or a power supply going out. We have to think about the things you mentioned, people in particular: how do we keep them safe, and what happens if one of those key people in your organisation becomes unavailable for whatever reason? These things need to be thought about, and we need to have procedures and mechanisms, and also drills, to actually test these things out in practice. What do you do, for example, if you can't communicate with your organisation because the signal has gone away? What do you do if you can't use voice‑over‑IP, or whatever? These things need to be looked at, and you need to think about backup strategies and defences for the future, and that applies to everybody, not just those affected by what's going on in Ukraine right now.

But, again, thanks for all you have been doing, Dmitry, it's fantastic work.

DIMITRY KOHMANYUK: Thank you. I'd like to express gratitude to the folks in the RIPE community and the many people who gave me their own personal advice and have been kind of mentoring me, and continue to do that.

You can't really drill for the war, but you can drill for part of your infrastructure going down.

JAN ZORZ: I would like to relay some comments from the online thing, and Kurt Kayser is saying that: "The functioning of the Internet for the Ukraine is essential for not giving up and resisting the aggression, great work."

Brett Carr is saying: "Great job from everyone at UA, an example to us all." Thank you.

DIMITRY KOHMANYUK: Thanks.

(Applause)

PETER HESSLER: Okay. Thank you very much.

DIMITRY KOHMANYUK: And thank you for your PC service. I know you are ending this meeting as a PC member, so my thanks to you and congratulations on the work.

PETER HESSLER: That's a fantastic segue: we are having RIPE PC elections. This will be my last meeting on the RIPE PC as I term out, so we guarantee there will be at least one new member of the PC. If you are interested in volunteering yourself or a friend, please check out the RIPE 84 website. We have a link with an explanation of what the responsibilities would be and how to apply.

I believe the deadline is 15:30, so the end of the first afternoon session.

And, with that, I believe we are done with this session a little bit early, so, go and enjoy some lunch. See you all later.

(Lunch break)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.