RIPE 84

Connect Working Group

Wednesday, 18 May 2022

At 12 p.m.:

REMCO VAN MOOK: Good morning, everyone. Thank you for choosing Connect, the second most favourite Working Group at the RIPE meeting. If you could all move out of the aisles, we will get started as soon as we can. We have a scribe who is going to take the minutes. Yes, this is a hybrid meeting, so I am going to have to do this twice. This room has four emergency exits, two in the back and two in the front, and if you are joining this meeting online, there is a single emergency exit, which is the big red cross on the top right of your screen. Good. Now we have that out of the way.

We have a very packed agenda so I suggest that we just get started. My dear colleague Will has provided us with the agenda slide, I haven't seen it. Do I want to see it, Will? Oh, dear lord. Right. We are going to do this online, all of the presenters will be in the room.

WILL VAN GULIK: That's the way we do things here.

REMCO VAN MOOK: The other Chairs of this session are Will, who is right here, and Florence, who is joining us online because unfortunately she couldn't make it to Berlin. Florence, we miss you and hope to see you again soon. She'll be mostly moderating and looking at the online stuff. We have a scribe, we have minutes that I'm sure you have all read very carefully and have no comments about. With that, I am going to approve the minutes of the last session and thank the RIPE NCC for compiling them. And then, let's see, what else do we have?

WILL VAN GULIK: I think for the housekeeping we are done, so I think we can go ahead with the first presenter. Pascal did some interesting stuff, he upgraded his network, and he is going to tell us about that. Pascal, the floor is yours.

PASCAL GLOOR: Now, how do I get my slides, change my slides? How is that working? Just tell me.

"Attention, unable to attach media". This is starting well. And the agenda is tight. How are you doing? Still nothing. So I have a 15-minute slot, and yesterday Will told me I would have to decrease it to 12 minutes because we have a tight agenda, and I was like, I have got 40-plus slides. But the difference between the last physical RIPE meeting and now is that TikTok happened in the middle, so the attention span went down from 15 minutes to 10 seconds, which fits my presentation, if it will come at some point. Okay, well ‑‑ oh, it's starting. But still, I have got two pairs of socks, one sized small and one sized bigger, whatever you want. How many gigs?

SPEAKER: 25.

PASCAL GLOOR: It's being ‑‑ oh, yeah. Well, thank you, let's start. So, the agenda is tight, let's go on. I have been working for Init7 for about two years now; they are the so-called nerd ISP in Switzerland. We have about 50 employees and our main product is called Fiber7, which is about €60 a month. Fiber7 came in 2014. We are basically renting the fibres from Swisscom, but also from other providers who do layer 1, and our service is a symmetrical gigabit, which we started eight years ago, and that was already crazy at that time. We do point-to-point, we have free choice of router, and we provide a /48 to every customer plus a fixed, public v4 address.

This is why we are called the nerd ISP, so far from home, it's not so bad.

Exactly. So, that was the infrastructure we built eight years ago: those are Catalyst 4500s, with gigabit line cards, 48 ports, bidirectional optics, and we get these breakout cables which have 24 fibres each. The market evolved; the others thought, these guys are doing gigabit, we need to do something, and they started deploying GPON, and later on the other majors also started with XGS-PON, but that's 10 gig and that's PON. I am going to go quickly over PON if you don't know what it is. It is a Passive Optical Network, which means you are going to share a port, passively, over multiple users, so that's not really point-to-point. Those are the different standards; they usually have different split ratios, you can go up to 128, most of them are doing 32. Those are the different speeds: 25G-PON exists, and I think 50G-PON and 100G-PON are in standardisation. And this is something I have seen with my own eyes. I can't show you a picture, because I'm not allowed to just take a picture of our competition, but that's basically the oversubscription I have seen: 25,000 times. And they come and say, well, you can have 10 gig. Yes, divided by 25,000.
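
To make that division concrete, here is a minimal Python sketch of the PON arithmetic, using the standard downstream line rates mentioned above; point-to-point Ethernet is the contrast case.

    # Worst-case per-subscriber bandwidth on a shared PON port.
    # Downstream line rates (Gbit/s) for common PON standards.
    PON_RATES = {"GPON": 2.5, "XGS-PON": 10.0, "25G-PON": 25.0}

    def per_user_gbps(standard: str, split_ratio: int) -> float:
        """Line rate divided by the number of users sharing the port."""
        return PON_RATES[standard] / split_ratio

    # A typical 1:32 split on XGS-PON: the "10 gig" on the label is shared.
    print(per_user_gbps("XGS-PON", 32))   # 0.3125 Gbit/s per user, fully loaded

    # Point-to-point Ethernet by contrast: one port, one user.
    print(25.0 / 1)                       # 25.0 Gbit/s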

Anyway, we are not really happy with PON technology, we just like to provide a service that is fair.

So, our reaction, one-and-a-half years ago, was: we need to do 10 gig, so we need a new access platform; our hardware was getting old. Our goals were 10 gig ports, 40 or 100 gig uplinks, 24 or 48 ports, MPLS for business services, and one box to rule them all, in the sense that it can play the role of a router but also of a switch, so we don't need different hardware in a POP.

And then we realised during the RFP that three of the four vendors were offering switches with SFP28 ports, so let's do that, and Fredy, my boss, asked: do the optics exist? Are they affordable? They are. So we selected that hardware. The one on the top has 48 ports for customers and four 100G uplink ports, and for bigger locations, where we have to interconnect a lot of switches, we use the 32×100G switch to interconnect them all. That is great because they can play the role of a switch and of a router, so in a very small location we will maybe have only one switch, which also plays the role of the router.

So the scope: 500 switches, 130 locations, an initial timeline of 24 months, and the expectation on upgrades was that 10 to 20% of customers would upgrade to 10G and a handful of customers would get 25G, because 25G gets complicated in a home network. It does.

We started with a silent roll-out, because we wanted to be able to actually upgrade in at least one city when we announced it, so we rolled out four POPs in our own city, which is near Zurich. Well, we started unpacking and stuff, and then I broke my finger, but it doesn't matter, we continued.

More pictures. More trash to get rid of. Then my hand got better, exactly. That was like ‑‑ this is at home, this is configuring so many switches at the same time. And this was one of the first POPs we upgraded, where we have the 32-port switch at the top, to which we connect all the switches in a redundant manner, and then we have all the access ports.

So we did the announcement, and Fredy tweeted that, which is in German, but basically it says: how does that work? We have something to announce. And quite quickly, someone found out that if you extrapolate, it actually fits quite perfectly over seven years, and he guessed correctly that we went from 1 to 25G. We also started a bit of advertising, which is in Swiss German: can it be a little bit more, please? And we offer 25G. Our competition still claims the fastest Internet in the world; that's from a year ago, and they still do it. And the day we announced ‑‑ that was really not planned, we had no idea about that ‑‑ Nokia and Proximus announced the fastest in the world, but that's PON, so divide by 32.

Conclusion: at least we solved the bandwidth problem. Challenges. Well, the speed test server: we bought this card, took a server and plugged it into the core with 100 gig. I expected iperf3 to work correctly, which it does, but I was surprised that the speed test server also works and does deliver 25G.
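
A minimal sketch of scripting that kind of throughput check, assuming iperf3 is installed and a server is listening; "speedtest.example" is a placeholder, not a real Init7 host.

    # Drive iperf3 from Python and report the achieved rate; iperf3's -J
    # flag emits a JSON report, -P 4 runs four parallel streams.
    import json
    import subprocess

    result = subprocess.run(
        ["iperf3", "-c", "speedtest.example", "-P", "4", "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)

    # Sum of all parallel streams, receiver side, in bits per second.
    bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"{bps / 1e9:.1f} Gbit/s")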

Affordable CPE, that's a bit difficult. There are CPEs that have SFP28 ports, but they can't actually handle the bandwidth; they just have the interface. Production bottleneck: we had a lot of discussions with the supplier, I guess you are all aware of the delivery issues, but in the end it wasn't that bad.

So, automate as much as you can. I took an old switch and I put a server in the back which does zero-touch provisioning, and I wrote a lot of scripts taking the old configs and migrating them to the new configs. The more you can automate, the better, because it is a lot of work. Outsource whatever you can, but outsource matching the skill set. We hired people just for unpacking; that saved a lot of time. Unpacking, fitting the right mounts, just doing that, the guys were working five days. Preparing the switches with zero-touch provisioning, installing the hardware, doing the migration: just take the people who have the right skill set, and break down your migration into steps where you can use different skill sets; you don't need an engineer for every step. Customers can also help, and your kids ‑‑ that's my son, actually, programming optics and labelling switches. Someone can also help you with debugging and writing blogs, and sometimes even physically help you; I am coming back to that.
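
Pascal doesn't show his migration scripts; the sketch below only illustrates the shape of that old-config-to-new-config translation, with both configuration syntaxes reduced to invented placeholders.

    # Hypothetical migration step: read customer port assignments out of the
    # old config and emit stanzas for the new platform. Neither syntax here
    # is a vendor's real one.
    import re

    OLD_CONFIG = """
    interface GigabitEthernet1/0/1
     description CUST:100234
    interface GigabitEthernet1/0/2
     description CUST:100235
    """

    port_re = re.compile(r"interface \S+/(\d+)\s+description CUST:(\d+)")

    for old_port, customer_id in port_re.findall(OLD_CONFIG):
        # Same customer, same port number, new 25G-capable template.
        print(f"interface Ethernet{old_port}")
        print(f"   description CUST:{customer_id}")
        print("   speed 25g")   # placeholder, not real vendor syntax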

This is Michael Stapelberg, a customer of ours, who did a lot of work analysing stuff and built a custom PC which works with 25G. It's a really interesting blog, because 25G is a real challenge, even on the PCIe side, to get the right bandwidth. That's Will, our only customer here, and on the other side, on the lower part, are Fredy and Oriana, and they all helped us do the migration. We offered them a year of free Internet and a nice dinner.

But yeah, your customers can help you; you know them. I wouldn't take an unknown person into a cellar in the middle of the night, but your friends or community friends can help you.

Don't underestimate the nerdiness of your customers; they need stuff. Supply chain issues, that's the other topic. And don't underestimate your overtime, because these projects are long. You think you have to do overtime, but that's not going to work over two years, you know; it's just going to accumulate, and it's really not good for your health, so be careful with that one.

That's our map. So we are quite far into the project. Everything that is red is done, the black ones are still to do, and we are two months from the end of the migration. It was planned for 24 months, but we are going to finish at 18 or 19 months.

Some more conclusions. Well, that's my final conclusion: 25G. I am hearing so many major operators saying: we have bandwidth issues, there's just too much, and we can't do more. Well, don't be afraid of the bandwidth. The volume doesn't change; people don't download more, they download faster, but the overall volume doesn't change. And what we do ‑‑ we don't sell the bandwidth. That's an important point which I forgot to mention in the beginning: we have this principle called max fix. The price has stayed the same since the beginning. You can choose if you want 1 gig, 10 or 25G; we are not selling the bandwidth, we are selling the service. The bandwidth is your choice, it's the same service; the optics are more expensive, but that's basically it. And we have an unexpectedly large number of 25G upgrades. At the beginning I said we would have a handful, because 25G is crazy, right? We have over 100 now, and that was a bit unexpected.

Questions? I have done it. I am still in the green with 26 seconds left.

WILL VAN GULIK: Thank you very much, Pascal.

PASCAL GLOOR: I did it.

(Applause)

WILL VAN GULIK: I don't see that we have any questions online ‑‑ oh, yes, we have got one live. Gordon.

SPEAKER: Gordon. Thank you for this refreshing and fun take on fibre ISP deployment. My question is: the equipment you present is not really the industrial, temperature-hardened hardware you would expect from an access switch, so does this choice of equipment limit your deployments, or do you have a lot of actively cooled locations?

PASCAL GLOOR: They are not cooled. Most of them are in Swisscom buildings, 15-plus years old, and they have a limit of 35 degrees, but it's quite stable, so we really haven't had any temperature issues. But you are right, those are enterprise-grade devices, which has a major advantage: we don't have port licences.

GORDON: Thank you.

WILL VAN GULIK: For your information, I will close the queue now because we are really tight on time.

SPEAKER: Working for Delta Fibre in the Netherlands. We are doing some deployments of only 8 gigabit and I see how hard it is. What kind of CPE do your customers use? Because I think that's the most challenging part. Do you recommend one, or do they choose themselves, and what kind of vendors?

PASCAL GLOOR: For 10G it's not that hard; you can take a MikroTik, that will work, and it's affordable for a private person. 25G is an issue: there is no actual CPE you can buy that will really do the bandwidth. As I said, there are MikroTiks that have 25G ports, but they just won't keep up with forwarding the traffic; if you are lucky, you are going to get 10 or 12G out of them. So the only possible way is, like I did at home, a PCIe card in your server, and that works well. I can do 25G at home, it really works, but even then you have to be careful which PCIe slot you put your card in, because you still need the bandwidth at the PCIe level; it's high bandwidth for home use. It's a challenge.
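
A back-of-the-envelope sketch of why the slot matters, using nominal PCIe per-lane rates; real-world DMA and protocol overhead shave off a bit more.

    # A 25G NIC needs more PCIe bandwidth than the line rate it serves.
    # Per-lane throughput: transfer rate (GT/s) x encoding efficiency.
    PCIE_LANE_GBPS = {
        "3.0": 8.0 * 128 / 130,    # ~7.9 Gbit/s per lane (128b/130b)
        "4.0": 16.0 * 128 / 130,   # ~15.8 Gbit/s per lane
    }

    for gen, lane_gbps in PCIE_LANE_GBPS.items():
        for lanes in (1, 4, 8):
            total = lane_gbps * lanes
            verdict = "ok" if total > 25 else "too slow"
            print(f"PCIe {gen} x{lanes}: {total:5.1f} Gbit/s -> {verdict} for 25GbE")
    # A Gen3 x1 slot silently caps a 25G card; Gen3 x4 is the practical minimum.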

SPEAKER: So it's mainly a home-built Linux solution they use ‑‑

PASCAL GLOOR: We do sell CPEs, and we are transparent regarding the performance of the CPEs, but most of our users going to 25G build something themselves, because they know how, and as I said, the blog explains how he did it with a small set-up.

SPEAKER: Ben, BGP Tools. My question is: I just looked at your PeeringDB page, and all of your ports are 10 gig, while you are giving customers 25G access. How do you handle the fact that a customer can theoretically blow two of your ports in one go?

PASCAL GLOOR: That's an interesting point; it's not an issue. Where would you download 25G from?

SPEAKER: The customer could max out two of your ports.

PASCAL GLOOR: They could, but it doesn't happen, that's it. That's why I am saying: don't be afraid of the bandwidth.

WILL VAN GULIK: Okay, thank you very much, Pascal.

(Applause)

PASCAL GLOOR: Thank you, Will, for the work.

WILL VAN GULIK: My pleasure. I have got 25G at home now. Now, on to our next presenter. Bijal, you know the drill: do you want a mic or do you want ‑‑

BIJAL SANGHANI: Hello, and I am going to give you a quick Euro-IX update. It's great to be back in person, by the way. So let's go. For those of you who don't know what Euro-IX is, we are a membership association for internet exchange points. These are our lovely members; a lot of them are in the room here today, so hopefully I will get to see you all and say hello personally later on. I want to thank the IXPs who are our members. We also have patrons; they are typically organisations that work closely with internet exchange points, for example data centres and vendors. So to any of our patrons in the room, a big shout out to you guys as well, we will see you around, and thank you for the continued support.

So, what do we do? We have two events a year ‑‑ well, this is the first time we are back to having events this year, and our first one is going to be in Finland next month, so we are looking forward to getting the IXP community together again then. We run a number of workshops, and obviously while we were doing things virtually we did a number of them virtually. If you are interested in seeing any of those, there are technical ones, commercial ones, but also others which talk about the community and things that would help, benefit and grow the community. So if you are interested in them, you can have a look at those online.

We have other services: we do a number of reports, we have an IXP report, a traffic report, and we also do benchmarking. That, of course, is just for the membership, but there is a benchmarking report that goes out once a year as well. If you are interested to hear the news from internet exchange points, you can subscribe to our newsletter, and members of course are always welcome to join the mailing list, where they can find out what's going on as well. We run a number of tools: the IXP database, which is what Leo is going to be talking about shortly, and the Peering Toolbox, which I am going to explain briefly. We have fellowship and Mentor-IX programmes, we did some community work on route server large BGP communities, and we are also working on a new IXP film. Some of you may remember that 15 years ago there was a film, The Internet Revealed, explaining what an IXP is and how it works, so we decided to do a new version of that.

Typically, during this session I give a quick update on Euro-IX but also talk about some of the news from within the membership. So first up here we have DE-CIX. The news from DE-CIX for this meeting is the introduction of EVPN, which they are planning to roll out in Q3. The IX-API, which other internet exchanges are involved with as well, is working on a specification for version 3, and that's going to focus on monitoring and statistics. DE-CIX again have been busy and are setting up new internet exchanges: Aqaba IX, and also watch out for Iraq IX. DE-CIX Phoenix has been ready for service since March, and you can speak to the DE-CIX team there. Also, their Frankfurt IX peaked at 11 terabits, so congratulations on the traffic peak, and DE-CIX New York has topped 1 terabit of peak traffic.

News from InterLAN: InterLAN currently has 125 ASNs connected, and their daily peak traffic is on average 380 gigs. They have a blackholing solution which was implemented in Q1, and they have completed their migration from Cisco to Arista in all their POPs. And they continue to commit to community projects like IXP Manager, PeeringDB and the Global NOG Alliance, so thank you for your support there, InterLAN.

LU‑CIX, to improve their service and to help prevent blackhole situations, recommends using BFD on peering sessions with the route servers, and they have enabled it on all of those sessions. I see Michael there. For anybody interested in this topic: at the next Euro-IX meeting, which I mentioned earlier, in Tampere next month, we are running a workshop on BFD on the Sunday, where LU-CIX will present their implementation and their experiences. If anyone is interested in joining that, please come and speak to me.

Last but not least on the IXP updates, INEX are celebrating 25 years of their internet exchange, so first of all congratulations to INEX. They have launched a photo competition, and this is for everybody in the peering community; you can access it through that URL, and entries are open until the 1st of September. It's really a community thing, a bit of fun, you don't need to be an expert photographer, and there are some really good prizes. The theme is celebrating light, but of course light can mean lots of different things to different people, and there are some examples of photos that have been submitted there.

The Peering Toolbox. This is one of the Euro-IX projects, a new tool for the community that we are currently working on. It's a community-focused project; the organisations currently involved are LINX, NAPAfrica, Kentik and HEAnet. The aim is to provide a learning structure and best practice for new people coming into the industry or learning about peering. You can come to a RIPE meeting and hear these presentations, but the question is: where do you start, what do you need to know, where do you go? The idea of the Peering Toolbox is to provide all that information in a structured format, so you can look at it, figure out where you are and what's useful to you, and use it as a reference and guide. We are not planning on writing all the content; we understand and know there is already a lot of good content out there, so what we are doing is reaching out to people who have content and want to share it, and we can publish it on our site as part of the structure, so that it helps people understand what they need to do when they start off in the industry.

And now, I am going to hand over to Leo who is going to talk about the IXP database project.

LEO VEGODA: The IXP database is a database that provides automated information about IXPs, and the information is provided by the IXPs themselves. This is an example of the kind of information that is shown on the current site: IXPs by connected AS numbers, new IXPs in the database, and so on. This was really good when the IXP database was much smaller, but the database is starting to outgrow this user interface.

We have been working on some things, because the current site has a limited set of tools, and it would be great to have a wider range and to support people who aren't able to use the available API themselves, because they don't have a tools team at their organisation or don't have the technical skills. So we have been working on a new presentation platform. You can see here that we have well-structured data sets with metadata describing each of the fields. Here we have an example of a tabular view of the data: you have things on the left you can click to quickly filter, and you can type in the box and do specific filters. You can also click on tabs and see charts, which you can adjust dynamically. You can incorporate data from other databases; in this example we have taken some data from PeeringDB, which has geolocation coordinates for interconnection facilities, and we have matched it up with IXP switches that are in those facilities. You click on the pin tab and you get a little flag with information about the IXP. It's all highly configurable, relatively easily.

It's not so much that we have to get developers to go and do stuff; we need a bit of user feedback. At this point, we are about ready to turn on the pipeline of data that will go to the new platform, and we need some people who would volunteer to give us a little bit of user feedback behind the scenes before we fully open it up. So my request is: if there are people in this room who would be interested in taking a look behind the curtain and helping with a little bit of feedback that will get us ready for the new platform launch, please send me an e-mail at leo@euro-ix.net and I can give you access so that you can help us out.
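
A minimal sketch of the kind of cross-database join being described: the PeeringDB facility endpoint is real, while the IXP-switch list is a hard-coded stand-in for data that would come from the IXP database API documented on the Euro-IX website.

    # Match PeeringDB facility geolocation against IXP switch locations.
    import requests

    # PeeringDB: interconnection facilities with latitude/longitude fields.
    facilities = requests.get(
        "https://www.peeringdb.com/api/fac", timeout=30
    ).json()["data"]
    coords = {f["name"]: (f.get("latitude"), f.get("longitude"))
              for f in facilities}

    # Stand-in for (IXP, facility) pairs from the IXP database.
    ixp_switches = [("ExampleIX", "Equinix FR5")]

    for ixp, fac in ixp_switches:
        lat, lon = coords.get(fac, (None, None))
        print(ixp, fac, lat, lon)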

BIJAL SANGHANI: Thank you. Finally, you can find other information about the IXP database, the JSON schema and the API on the website over there. Lastly, I just want to say a big thank you to the sponsors, the APNIC Foundation, APNIC and LACNIC, and if you would like to see more work on this project, do talk to us if you are interested in sponsoring. And that's it from me. Thank you.

(Applause)

REMCO VAN MOOK: All right. Any questions for Bijal? You know the drill: join the queue online or stand in front of a microphone. That's how it works, I don't make the rules, sorry. I do, actually. Never mind. No questions? Thank you, Bijal.

(Applause)

Next up is Florian from Arista.

FLORIAN HIBLER: I am part of the systems engineering team in Germany, and I want to give you a brief outlook on what's up and new with 100 gig and 400 gig form factors.

Let's jump right into it. A lot of you have probably seen or are using breakout optics nowadays, 400 gig to 100 gig, and the most common problem there is fat-fingering. So somebody came up with the SN connector, which puts literally four individual fibre pairs into one single optic, with the idea that you can easily pull out a single link at a given time. This happens with these little black pull tabs you can see there, so even for a clumsy data centre technician it is still possible to plug or remove any given link. So, that's one thing.

Another thing which is pretty new to the whole industry is the so-called SFP-DD or DSFP. The two standards are still battling. What is new here? It's essentially a 100 gig pluggable in an SFP form factor, so we have 50 gig per lane on the electrical side, and we have already seen optics like SFP-DD LR out there in the wild. Why are the two standards battling? DSFP is mainly backed in Asia and China, while the other has more traction in the rest of the world. We will see where it goes, but if we talk about the RIPE region here, we are mostly talking about SFP-DD, so only one variant would be relevant for you.

What does it actually stand for? SFP-DD stands for SFP double density, which we know already, and the difference is that they are built differently on the electrical side as well: the SFP-DD introduces a new row of additional connectors, while the DSFP repurposes some lower-speed pins.

In addition, as you can imagine, if you buy an SFP-DD system you can't plug in a DSFP, or vice versa, so they are not compatible with each other, but both are backwards compatible: in either port you can still plug in regular 10, 25 or 50 gig modules ‑‑ 50 gig you see more in the data centre than in the carrier world. Vice versa, though, a 100 gig SFP-DD module cannot be used in a regular SFP port. And overall, a little bit of an outlook: what's coming up after 400 gig? The world is already talking about 800 gig, that's nothing new, but what could an optic and a transition path look like? Here we are looking at an OSFP optic, and the idea is for the first 800 gig uptake to be two times 400 gig, so literally, as you can see on the left side, having two LC connectors on one OSFP optic, because it provides the space for it; you just plug them in vertically and you get two times 400 out of one 800 gig optic.

The same goes for data centre purposes with the optic on the right side. So, just to give you an idea of what it could be in theory: you could have something like an 800 gig optic in your core or in your access switches, and they are fully compatible with each other on the optical side, so no problem there at all.

And to close my brief lightning talk about optics, a little bit of an outlook: what is coming even further down the road, and what's actually going on with OSFP, as we see it from Arista and our customer base. I am a big fan of OSFP. For 400 gig we are seeing by now approximately a 50/50 split in the market, while the majority of the OSFP is still deployed at hyperscalers; QSFP-DD, with its backwards compatibility, is what you usually see in the router market. For 800 gig we expect the OSFP volume shipped to exceed QSFP-DD. There are various reasons for that: heat, MTBF, better cooling and power efficiency, and longer-reach optics. Thermal advantages will become more critical at 800 gig and beyond. So, if you are thinking about deploying a 400 gig network nowadays, always consider whether you want to be backwards compatible with 100 gig, or would rather have an easier upgrade path to 800 and beyond with the OSFP, because that might also mean you don't need to change any fibre cabling.

Looking even a bit further out, we are already thinking about 1.6 terabit, and we see unprecedented interest from the industry there. If you follow the link below to the OSFP MSA website, you can see that by now every major vendor in the world, including all the optics vendors, has joined the OSFP MSA. This might not mean they are going to use it for 800 gig, but they are certainly going to use it for 1.6 terabit, which has already been designed, so we are looking at another stable form factor for hopefully the next 10 years.

And that already brings me to the end of my presentation. Do we have any questions?

WILL VAN GULIK: Thank you, that was fast, wow.

(Applause)

WILL VAN GULIK: So, let's see. I don't see anything online, and I don't see anyone here around. Well, that was amazing, thank you very much.

(Applause)

WILL VAN GULIK: So next up we have got a lightning talk and Max is going to present that for us, so the floor is yours, my friend.

MAX: So, actually, my name was not on the agenda, but in the interest of the Mediterranean alliance they sent someone Italian instead of a Greek. The original person who was supposed to be presenting, together with John ‑‑ you can see that on the website ‑‑ is coming up with this IXP neutrality project, which is supposed to shed some light on how IXPs can be more neutral, or how that should be. I don't have any more details because it is still in the very early phases, so you can check the website if you are interested in participating; they are still looking for some help. They are also receiving some help, as you will see on the website, from the Internet Society, because we like to see neutrality on the Internet being a standard, something that happens. If you want to know more, check the website, or send an e-mail to Mikalas if you know him. So, this is all I had to say, in the interest of time. Thank you very much.

(Applause)

WILL VAN GULIK: Okay. So now we have got our next talk, and Remco is going to tell us about our death, I suspect.

REMCO VAN MOOK: I normally don't do presentations during my own Working Group, but I am here at the insistence of my co-chairs. This is a presentation I did at RSNOG in November last year; this is the summarised version of it. It's not an uplifting topic, but here is what got me started on this.

You are now negotiating with a computer if you are a peering coordinator. How does that make you feel? If you are trying to sort out peering with Google, you have the friendly web interface that will say yea or nay, and you might be able to submit a ticket if you disagree with the outcome.

And that's getting more and more ‑‑ people are trying to make it scale ‑‑ hi, Florence. So, what's the evolution here? Peering coordinator seems to almost be a bit of a dying art. There is large-scale consolidation happening on both the IXP and the content side, and the job openings that do show up are either at completely new companies trying to make an inroad, or at a strategic negotiator level. And the actual number of people in this room has not increased with traffic, so we are either really good at scaling or really poor at planning.

An interconnection decision used to be beers and mates: having a drink, and you know what, we will run a cable together, and that's it. But that has now become a strategic decision for a lot of companies. Instead of, hey, we are both on this exchange, let's peer, you are now having conversations like: please give us free space and power across your entire footprint and we will put some caches in, right? Some companies haven't had a peering coordinator position for half a decade by now. And what else is going on is the tragedy of least cost: only the largest access networks are now making money on interconnection. I mean, they try to claim differently but, you know. The SLA of the transit provider you are buying from mostly covers availability and congestion, and they are usually trying to solve for traffic volume and not so much for performance, because they are not really interested.

And to make all of that worse, Covid came, and Covid really kicked it, because traffic volume is a solved problem ‑‑ unless you are in Italy and your Netflix got downgraded ‑‑ and it's usually cheap. But online collaboration is not so much about traffic volume; you can't really cache a Microsoft Teams meeting. You could ‑‑ "can you hear me now?" ‑‑ but that's not how it works. And you are seeing this massive transition from people just consuming YouTube to producing content, because that's what you do when you join a Zoom call, or Teams, or whatever, and all of a sudden your upstream quality, which is sort of an afterthought in a lot of topologies, is becoming important, making things worse again for access ISPs. All of the traffic is moving from the observable protocols that you can optimise to all sorts of low-latency encrypted stuff. And for the applications, you are seeing round-trip criteria becoming more commonplace: 50 milliseconds round-trip for online games to function is not entirely uncommon, and if you look at remote gaming like Google Stadia ‑‑ there are ten others ‑‑ and the metaverse, sub-20 milliseconds is coming, and how do you do that? The Internet of services.

Identifying which piece of your interconnection ‑‑ how you connect to the rest of the world ‑‑ is under-performing is getting harder, because it's no longer about congestion and there is more diversity: exchanges and data centres are popping up, and your content is getting closer to the edge, which means there are more places it appears from. To make things worse still, your end-user experience is only partly determined by the initial traffic destination: it's nice if you have a front end at Cloudflare that is accessible within two milliseconds, but if the back-end it talks to is far away, it's going to be disappointing for most of the world. I was in a hotel in Dubai a month ago, and the portal I needed to log into the wi-fi was a bit buggy and slow, so I started to pay attention to the DNS names that scrolled by, and I figured out that in order to log into the wi-fi in Dubai, AWS US East 1 was involved. Kind of disappointing.
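
As a rough sanity check on those round-trip budgets (50 ms for gaming, sub-20 ms for the metaverse), a minimal sketch of the propagation arithmetic, assuming light in fibre travels at about two-thirds of c and ignoring serialisation, queueing and server time.

    # How far away can a service live and still meet a round-trip budget?
    C_FIBRE_KM_PER_MS = 300_000 / 1.5 / 1000   # ~200 km of fibre per millisecond

    def max_one_way_km(rtt_budget_ms: float) -> float:
        # Half the budget for each direction, propagation delay only.
        return C_FIBRE_KM_PER_MS * rtt_budget_ms / 2

    print(max_one_way_km(50))   # ~5,000 km: the online-gaming budget
    print(max_one_way_km(20))   # ~2,000 km: the "metaverse" budget, best case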

And then we move on: how much do you know about your own network? You probably know a lot; you should, it's your job. You have your network monitoring, you have a bunch of flow analysis, and you have interface statistics. If you are an ISP, you probably know something about your top talkers, your loudest customers. And how much do you know about the destination networks? Yes, you know in which direction you are sending traffic for them, and you probably even know where the traffic is coming from if you get it from them. Beyond that, you don't really know all that much.

So, really, moving on from the olden days of let's just get some capacity and we are all good, we are going to shift from a volume-based approach to a value-based approach. Having a great Netflix experience or my YouTube streaming in, that's all very nice, but if Teams no longer works ‑‑ and I have had two kids doing home-schooling on Teams ‑‑ if that breaks, not just the school has a problem, I have a problem as well. If Roblox doesn't work, the kids are going to walk away; if people don't get access to Oracle, they walk too. What does the customer base want the network to deliver? This is something that I think we all need to start thinking about, or start evolving the way we do this. So, it's no longer just about adjacencies; it's also about traffic and the key applications. What are people actually using my network for, and how can I make sure that their end-user experience is optimised? How do I keep people happy running Teams, even though they have to use Teams? Cloud providers are taking more and more of this: you need to have them reachable within 150 milliseconds, and your packet loss needs to be a very minimal number.

So, you pull all of that together and you look at the components of end-user performance. Latency and jitter, those are pretty standard. Path: BGP, traceroutes, you know. And DNS: time to first answer, black holes ‑‑ DNS is actually pretty good at black holes ‑‑ recursive servers. Let's briefly go through some of them. As I said, this is a summary of a far longer presentation; if you look at the keynote file that is in the archive, you will see a whole bunch of hidden slides which you are welcome to have a look at.

What does latency look like? Here is an interesting thing. This is from a country in south-eastern Europe, and this is the traffic pattern over two weeks. There is an interesting thing here, because this is a network that would normally not be seen as congested: none of the interconnection links were full, and the bottom pink ‑‑ or purple ‑‑ line is the average response time, which all looks pretty decent. It's all less than 50 milliseconds, and there are no great variations by time of day or anything. But that's the average, the 50th percentile. If you start looking at the 70th percentile, the 90th percentile ‑‑ what is the worst 10% that my customers are getting? ‑‑ all of a sudden latency shows up, congestion shows up. And this is not congestion you are going to find by looking at your interface statistics, because it's hidden in the buffers of your expensive Junipers, but it is congestion that your customers still experience, and you can see the peaks happen every 24 hours.
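
A minimal sketch of the percentile analysis being described, with synthetic RTT samples standing in for real per-customer measurements.

    # Averages hide queueing delay: compute tail percentiles instead.
    import random
    import statistics

    random.seed(1)
    # Most samples healthy, plus a daily-peak tail sitting in deep buffers.
    rtts = ([random.gauss(30, 5) for _ in range(900)]
            + [random.gauss(120, 20) for _ in range(100)])

    print(f"mean: {statistics.mean(rtts):.0f} ms")   # looks fine
    for q in (50, 70, 90):
        pct = statistics.quantiles(rtts, n=100)[q - 1]
        print(f"p{q}:  {pct:.0f} ms")                # p90 exposes the congestion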

Yeah, yeah, yeah. I should have taken this out. Let's make it worse: measuring DNS. I am not going to spend too much time on this, because Geoff Huston gave an excellent presentation at RIPE 83 about DNS and the Cloud. Why is DNS now important for you? It has become the service directory. It is no longer about "where is YouTube?"; it's "where is YouTube for this particular customer in south-east London?". And the DNS outcomes vary greatly depending on which DNS resolver your customer is using. Bear in mind that even for well-configured DNS set-ups, the success rate is only 98 percent ‑‑ don't take my word for it, this number came from a friend of mine who apparently knows a little bit about DNS ‑‑ and any wrong answer here can completely eliminate any network optimisation you have tried. DNS caching is a bit of a double-edged sword, and to make things even more painful we have all these external DNS providers ‑‑ Google, Cloudflare, Quad9, any number of them ‑‑ all trying to optimise the result for your clients. But there are also other people trying to optimise traffic in a different way, because they know that people in this /25 are in this particular suburb, which means this node should be used; and if you have a resolver cache that doesn't know about this and just bluntly says, YouTube, that's in Italy, you are done.
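
A minimal sketch of that resolver dependence, using the dnspython library and EDNS Client Subnet (RFC 7871); the resolver address and subnets are placeholder choices, not anything from the talk.

    # The same name can resolve differently depending on the client subnet
    # the resolver presents upstream.
    import dns.edns
    import dns.message
    import dns.query

    RESOLVER = "8.8.8.8"
    for subnet in ("203.0.113.0", "198.51.100.0"):   # pretend: London vs. Milan
        ecs = dns.edns.ECSOption(subnet, 24)
        query = dns.message.make_query(
            "www.youtube.com", "A", use_edns=0, options=[ecs]
        )
        answer = dns.query.udp(query, RESOLVER, timeout=5)
        print(subnet, [rrset.to_text() for rrset in answer.answer])
    # A resolver cache that ignores the client subnet hands every client the
    # same answer, defeating exactly this optimisation.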

So, brief conclusions: interconnection is no longer just about volumes and next hops. Applications are moving to the Cloud, and they require a completely different approach to how you optimise their delivery. The only way to get any of this done in a scalable way is automation, which means we should probably find another reason to go and have beers with each other. And the things you are currently measuring on your network are not telling the whole story. I know this is very compressed; I hope it has piqued your interest, and I would love to take this discussion forward at a next Working Group. Thank you.

(Applause)

WILL VAN GULIK: So, thank you very much for that. Unfortunately, we don't have time for questions, so that will be for the next one. We are now getting to the end of our Working Group session, so thank you all for being here. We don't know yet where we are going to be for the next meeting, but we hope that we will see you all there, and that Florence can maybe join us next time. Besides that, I wish you all a really good lunch, and please remember to rate the talks. Thank you very much.

(Applause)

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND