RIPE 84

Archives

Open Source Working Group

Wednesday, 18 May 2022

At 10:30 a.m.:

ONDREJ FILIP: Good morning, everyone. We are just about to start, so if you can sit down and fully concentrate. Because this is the best Working Group you can hear at this meeting, so you should concentrate. I'm still waiting for some people who are just joining the room. And the doors are closed.

Good morning, everyone. I am one of the co‑chairs of the Working Group. I was chosen to start the meeting because I wear a jacket and they told me I have to do it in this case. Since the last physical meeting, which was roughly more than two years ago, and since the last virtual meeting, we have a change: we have a newcomer aboard. I would like to welcome Marcos Sanz, so please help me with welcoming him. I will let him say a few words.

MARCOS SANZ: Thank you, I am glad to be here and glad to be back at the RIPE meeting and this is my term starting today, so I'm excited and long live Open Source, thank you very much.

ONDREJ FILIP: Thank you. I am sure he is a very good addition to the team, so thank you very much for joining and volunteering for this job. We have a pretty full agenda; we had the opposite problem this time, we had to move some of the talks elsewhere, so we have no room for lightning talks, but that's how it is when we are limited to one hour. First of all, let me finalise the agenda: we have three presentations. Are there any other additions to the agenda, anything you would like to discuss? I don't see any reaction, and I did not expect any, but just in case, you can raise your hand.

We need to approve the minutes from the previous meeting. Those minutes were circulated a long time ago and we haven't received any comments; there was not any complicated decision, it was mostly presentations, but just in case: if you read the minutes and have something against them, this is the last moment you can protest. Again, I don't see anything. Also, we should review the action list, but we are so great that we do all the actions inside the meetings, so we don't have any at the moment.

Last but not least, let me mention one thing. Some people told me they couldn't request a speaking slot at this Working Group because they didn't know how. There is a mailing list to which we usually send requests for presentations, and which is the right place to discuss all the matters relating to this Working Group. So please join the mailing list; that's how you will get into contact with this community and see all the important information related to this Working Group.

And that's it. I will ask Martin to introduce the agenda and I think we can start.

MARTIN WINTER: That's what you get for not dressing up, I'm the last one to speak. Anyway, we already have a full agenda. It's kind of interesting, especially as it's a physical meeting again. Thank you very much to the ones that submitted agenda items; it was way easier to fill up and find interesting talks this time, compared to the last time, when the virtual meetings were not that kicking in. First, from the RIPE NCC, we have a talk about RPKI core and Open Source; unfortunately the speaker is remote, so we will see how well the video guys can handle that. Then one of my colleagues, Donatas Abraitis, talks about FRR, which turned five years old since the Quagga fork in April, and what's going on there. And then a talk about Peering Manager.

MARCOS SANZ: I hope Bart is online, from the RIPE NCC. They had some challenges and experiences to share while open sourcing their software, so, I guess, Bart, the floor is yours.

BART BAKKER: Yes, thank you, good morning. I work for the RPKI team at the RIPE NCC and we run the RPKI Trust Anchor. Early this year we Open Sourced our Trust Anchor source code, which we call RPKI core; it's the central software that we run on our Trust Anchor. Like was said, we had some challenges along the way, so I am here to talk about our approach to this and some of the bumps we hit on the way, and if Meetecho continues to the next slide, we can talk.

Before we start talking about problems, let's talk about why we Open Sourced this in the first place. This being the Open Source community, I won't explain all the standard reasons; we all love it. But since we run a Trust Anchor, we have some special reasons of our own. Running a Trust Anchor requires a tremendous lot of trust from the community, and an untrusted Trust Anchor is kind of ‑‑ so we see it as one of our purposes to earn trust from the community, and one of the ways we do that is by having a very high level of transparency. Examples of that: we publish our quarterly road maps and the security assessments we do on our website; in case we hit an outage or problems in production, we try to write extensively to the community, to show what we did and how we mitigated it for the future; and now we also allow the community to review the code we run in production. We didn't Open Source this with the purpose for people to run it themselves, especially not in production; this is a RIPE NCC project, it runs the RIPE NCC RPKI Trust Anchor. On the other hand, anyone is free, of course, to run it themselves, and we see that especially in dev set‑ups it could be nice to run it on laptops.

How we started was by doing two security assessments. There was an external company we hired to do this, called Radically Open Security. One of the assessments was a pen test, which included access to the source code; it covered our RPKI but also included the RIPE portal and other RIPE NCC SSO projects. The other one was a plain source code audit of the RPKI production code. Some issues were found, all issues were fixed; the last project finished in December. All the reports are fully published on the website, with a link you will find in the slides.

Now, while our mainline was secure enough to be published, we hit some other issues. We had a compile time dependency on a proprietary HSM library, built for the RIPE NCC specifically and only available to us; we can't publish it, even in binary form, to the outside world. And even though we don't expect people to run this software themselves in production, we want everybody to be able to build it on their laptops or systems and, if they want, to run it. So we had to make this an optional dependency, for anyone to be able to compile our source code and build without this library.

After that, we implemented monitoring and metrics on whether the library is present, because the one thing we want to avoid is that, due to some error in our CI systems, the HSM library is not available on our production systems. So we added an alert on that: as soon as we would deploy a version that does not include this HSM library, for whatever reason, we are notified and can fix the problem.
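The shape of such a check can be sketched language-neutrally. RPKI core itself is Java, so this Python sketch only illustrates the idea of exposing "can the native library be loaded?" as a metric value; the library name is a placeholder, not the real dependency.

```python
# Illustrative sketch only: expose whether a native HSM library can be
# loaded, so monitoring can alert if a build without it ever reaches
# production. The library name below is a placeholder.
import ctypes

def hsm_library_available(libname="libhsm-placeholder.so"):
    """Return 1 if the native library loads, 0 otherwise (a gauge metric)."""
    try:
        ctypes.CDLL(libname)
        return 1
    except OSError:
        return 0
```

A monitoring system can then alert whenever the gauge reports 0 on a production host.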

We also hit some run time dependencies, mostly on RIPE NCC services: the resource cache is one, and some LIR information is the other. These services are not reachable from the outside world, and they would be useless anyway because they require authentication; obviously our internal systems can properly handle these secrets and have keys to connect to them, but we can't allow the entire world to connect to our services. So we added another resource cache implementation that uses a file, for anyone to use, run and modify the contents of the resource cache. Also, we made the LIR information optional. We mostly use it for having nice names on certificates, so we made this fully optional in case you can't connect to the LIR information service.

Another dependency we have is on the Trust Anchor signing software, which is necessary to start up and initialise a Trust Anchor, so as part of this process we Open Sourced that project too; later in the slides I will share a link to it.

So basically, at this point, our production code was good to go and was safe enough to publish to the world. But this being a 10‑plus‑year project, we had a lot of commits, and each of these commits may hide secrets. These secrets are not necessarily passwords and things we can easily rotate; it encompasses anything we don't want to share with the public. In our mainline the secrets have all been removed, but they still exist, or may exist and go unnoticed, in some of the commits in our history. There are over 4,200 commits, about 10% of them merge commits, so there are almost 4,000 commits to review if we actually wanted to do that. At the moment of publication, it was almost 90,000 lines of source code, about half of that Java, so this would be a huge job and kind of unfeasible. By publishing our full history, we would risk exposing these internal secrets, and that's not a risk we were willing to take. So instead of pushing the entire history, we decided to publish a new tree with the code we know was good to go; we created a nice baseline, and the new tree contains the code that was running in production on our Trust Anchor on February 9th. Instead of moving our entire history to GitHub, our team keeps working on the internal repository, because the history does matter to us; it helps to be able to look back and see what changed when, and why it changed.

So we continue working on the internal repository, and our changes are now automatically published to the GitHub repository whenever we deploy to production. We have a script that transplants all of the commits from our internal repository on top of the new baseline; it lists all of the commit summaries in the body of the commit message and then pushes to the GitHub repository, fully automated. The script is listed on the slide and available in the repository as well, so in case you are interested, you can review it.
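The transplant idea described here — squash everything since the published baseline into one public commit whose body lists the internal commit summaries — can be sketched roughly as below. This is a toy reconstruction with made-up file names and messages; the real script in the repository handles many more cases.

```python
# Toy sketch of "transplant" publishing: squash all internal commits since
# the published baseline into a single public commit whose body lists the
# internal commit summaries. File names and messages are made up.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    result = subprocess.run(["git", *args], cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

work = pathlib.Path(tempfile.mkdtemp())
internal, public = work / "internal", work / "public"

for repo in (internal, public):
    repo.mkdir()
    git("init", "-q", cwd=repo)
    git("config", "user.email", "dev@example.net", cwd=repo)
    git("config", "user.name", "Dev", cwd=repo)

# Internal repository: a baseline commit plus two development commits.
(internal / "code.txt").write_text("baseline\n")
git("add", ".", cwd=internal)
git("commit", "-q", "-m", "baseline", cwd=internal)
(internal / "code.txt").write_text("baseline\nfix\n")
git("commit", "-q", "-a", "-m", "internal: fix secret handling", cwd=internal)
(internal / "code.txt").write_text("baseline\nfix\nmetrics\n")
git("commit", "-q", "-a", "-m", "internal: add HSM metrics", cwd=internal)

# Public repository: seeded with the baseline tree only, no history.
(public / "code.txt").write_text("baseline\n")
git("add", ".", cwd=public)
git("commit", "-q", "-m", "Baseline at publication", cwd=public)

# "Transplant" on deploy: copy the current internal tree onto the public
# baseline and commit once, listing internal commit summaries in the body.
summaries = git("log", "--format=- %s", "HEAD~2..HEAD", cwd=internal)
(public / "code.txt").write_text((internal / "code.txt").read_text())
git("add", ".", cwd=public)
git("commit", "-q", "-m", "Deploy to production", "-m", summaries, cwd=public)
```

The public repository ends up with two commits (baseline plus one deploy commit), while the internal one keeps its full history.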

And this is what these commits look like. We list the commit that we merged and now run in production, and in the body of the commit we list all of the commit summaries that are included in this change set. This is how we addressed the issue of not being able to share our history and how we worked around it. Our development process can still be followed, we believe, from these commits. We aim to keep these changes small so they can be easily reviewed, and our development progress can be easily followed on GitHub as well. The trade‑off here was that we couldn't move our full development process to GitHub, because of the history.

Some more things we Open Sourced, either recently or a long time ago. Like mentioned earlier, the RPKI Trust Anchor signing software was Open Sourced as part of Open Sourcing RPKI core, because we needed it to be available to all of you to initialise the Trust Anchor. A project that has been Open Source from the start, and was therefore much easier, is the RFC 8181 publication server. There is an RPKI commons library that we use both in the publication server and in RPKI core that has been Open Sourced, and we have some tools we use to create a daily repository dump and statistics.

So, as a result of this, anyone can now follow and review the code we run on our Trust Anchor in production, and we are very interested in feedback from the community. How can we make this more valuable for the community? If you have any ideas, please reach out. Otherwise, I think there is a few minutes for questions; if you have any, please ask them here or feel free to reach out, you can find me on e‑mail. Thank you for listening.

MARCOS SANZ: Thank you, Bart.

(Applause)

MARCOS SANZ: Personally I found it very interesting; there were some trade‑offs to be made and I think you chose very elegant solutions for the sake of transparency. Are there any questions? Any ideas? Suggestions? Something online in Meetecho? No. Okay, one person coming to the mic.

SPEAKER: Tim, working for DE‑CIX. Thank you very much; I like that you took the effort to Open Source this, even though I might not be able to review or use it. The point where I got some question marks in my head was that you stick, as it seems, with two different Git trees from now onwards. I understand that sometimes looking into the history helps, and I also understand you wouldn't want to publish it, that totally makes sense. But it seems a little bit clumsy that you keep that up from now onwards and you are not using the GitHub tree as well; I mean, it's distributed version control, so it wouldn't be a problem to have branches only locally or something like this. So maybe you could elaborate a little bit on why you decided that, because maybe there is something that didn't come to my mind, and which might be valuable for others thinking about Open Sourcing stuff they are using internally.

BART BAKKER: Yes, it's a fair question. We decided to do this because internally we do sign our commits, it's individual author commits, and of course we make local branches, we use merge requests, and often it's just very useful during development to see the history of a file or of parts of our source code, including the original commit messages of the author on why it changed. Having these two trees, if we would switch to GitHub for feature development, it means we would have a very hard time reaching this history during the development process, because we would need to write some scripts and do some transplanting between the original history and the GitHub history. I won't exclude this for the future, but for now, our history is just too important for us. Like I said, maybe in a year we can revisit this, because a lot has happened; in this year we do internally review commits to also not include these secrets, of course, so every commit we made from the point of publishing is actually good to go Open Source. So who knows, maybe in a year from now we can move our development process slowly to the full public. For now, like I said, it's just too much of a pain for us to move it there.

MARCOS SANZ: Okay. If there are no further questions, then on to the next speaker.

(Applause)

MARTIN WINTER: So next I would like to introduce Donatas Abraitis, he is one of the maintainers of FRR and he wants to talk a bit about what has happened over the past five years.

DONATAS ABRAITIS: Today I am going to talk about where we are with FRR, which is five years since it was created, as Martin mentioned before. At the moment I am one of the FRR maintainers, working at NetDEF as a software engineer, and at Hostinger.

New releases are rolled out on a regular cycle of a few months, in most cases. The release process splits the development branch in steps: first we freeze the development branch, where no new features can be merged; two weeks later we branch off, so new features can go in again; and two weeks later we tag the stabilisation branch with an additional rc tag and run additional pre‑release compliance tests to make sure we don't have any regressions.

Two or more people are assigned to be release managers, and they rotate, to get more people familiar with the flow and to avoid struggling with coming releases. Since we don't have LTS releases, we try to backport critical bug fixes or fixes for misbehaving features to the one or two latest versions. We use Mergify, I don't know how to pronounce it correctly, and it kind of helps to deal with backports.

Some details about our CI system. The first CI stage builds the packages on the supported distributions: RPMs, snaps, whatever. The heavy lifting part is the topology tests, which take the most time, and the last part is running analysers, just to check that we don't have any new memory leaks, buffer overflows etc. The average build time is around two hours, and unfortunately 50% fail, but due to external conditions like OS upgrades and repository or network failures. Some topology tests fail mostly due to timing issues and need to be fixed somehow. I forgot to mention that we use our FRR bot to run checks on commits. This is just a big picture of the most important changes since the 7.2 release. As an example, a lot of missing features and RFCs were implemented in BGP, like extended message support, route refresh, multi‑homing, open delay timer, lots of them, so I can't cover all of them in a short talk. So I grabbed a couple of less known or even unknown changes and I am going to talk about them shortly.

How to configure BGP ‑‑ joking. As mentioned before, I just took five less known changes and I am going to talk about them in short.

First, Lua hooks. The hooks allow you to perform some actions at the time when an event happens; in FRR this is still at an early stage, but with more effort this will be very promising in the future. At the moment we have a single hook for zebra, but we have more for BGP and route‑maps, which is kind of useful. At the moment it requires Lua 5.3, because all the existing C bindings are based on 5.3. Let's say you have a clients database with IP prefixes, and you want to put some extra data into the logs before they are parsed and sent to an aggregation or storage system like Elasticsearch etc. This is where it can help.
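As a hedged illustration, attaching a Lua script to a route-map might look roughly like the fragment below. The exact CLI keywords, script location and entry-point signature vary between FRR releases, so treat every name here as an assumption and check the FRR scripting documentation for your version.

```
! Sketch only: route-map driven by a Lua script (syntax assumed, verify
! against your FRR release; the script would live in /etc/frr/scripts).
route-map RM-LUA permit 10
 match script rm_script
!
router bgp 64500
 address-family ipv4 unicast
  neighbor 192.0.2.1 route-map RM-LUA in
```

The Lua script named by `match script` is then invoked per route, which is what makes per-prefix enrichment or attribute rewriting possible.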

Also, we can specify a Lua script that is going to be called for every update from a particular peer. In this example we set the MED attribute to 333 for every newly received route; at the moment we can only overwrite the metric via Lua, but it's very easy to extend, and the decoders and encoders should be extended as well. Technically, we could even implement a kind of local validator for the prefixes or so.

Another interesting daemon is SharpD ‑‑ it's something like a testing framework to generate millions of routes and do performance testing: you just specify the nexthop address and the number of routes to be installed. Route distribution works the same way as with any other protocol; it has an administrative distance of 150. It can also send opaque data. One real‑life example: when I was implementing BGP extended message support, I needed tens of thousands of prefixes to be bundled into the same BGP update message, to get above 4 K bytes, so it helped me to send them. Or you may want to verify how fast routes are processed, how many resources you need, or how your router behaves with millions of routes.

What is tracing, first? Tracing gives you an advanced performance and analysis tool, but it is not just about watching the system like tcpdump; it's more low level and can be used to trace specific events and functions. I am going to talk about two tools: LTTng, which stands for Linux trace toolkit next generation, and SystemTap. LTTng is supported since 8.0, and by the way, has anyone used tracing in FRR? Got it. In order to use LTTng you have to compile FRR with the LTTng flag and basically that's it; below is an example of how to use it. Before eBPF, SystemTap I guess was the most powerful tracer; it can do much more, like user space probing or kernel hacking. I have seen people even patching the kernel live with SystemTap. As with LTTng, you have to compile with a flag, and keep in mind that you can't compile FRR with support for both LTTng and SystemTap; choose one.
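An LTTng session against FRR tracepoints might look roughly like the session below. The configure flag, provider name and event pattern are assumptions (run `lttng list --userspace` to see what your build actually exposes), so treat this as a sketch rather than a recipe.

```
# Build FRR with LTTng tracepoint support (flag name assumed)
./configure --enable-lttng && make && sudo make install

# Record BGP tracepoints while bgpd runs (provider name assumed)
lttng create frr-tracing
lttng enable-event --userspace 'frr_bgp:*'
lttng start
# ... exercise bgpd: bring sessions up, inject routes ...
lttng stop
lttng view
lttng destroy
```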
As an example, BGP at the moment has around 30 trace points, but SystemTap allows you to hook on whatever you want, a specific function or even a specific line. It differs from LTTng in that you compile the script, like on the right side, into a kernel module and you just load it and run it. But it comes with some safety risks: you can crash or freeze the application, sometimes the kernel. The video should show up, I don't know.

Long lived graceful restart is a graceful restart extension for BGP, and it allows you to retain stale routes for longer than the one hour of standard graceful restart. When the graceful restart time expires, stale routes get the well‑known LLGR_STALE community and routes are recalculated to make sure they are less preferred; routes that have the well‑known NO_LLGR community are flushed immediately, and others are kept for a longer time, as configured.

We can even disable standard graceful restart by setting the restart time to zero; that means it's totally skipped and long lived graceful restart is activated instead. We have had it since 8.2, if I remember correctly.

As another example, BGP can send multiple identical announcements with an empty community attribute ‑‑ but why do we need to send an update if there is no actual change at the egress? In this example, X1 receives two paths with communities from Y2 and Y3; in order to induce an update for prefix B, we just disable the link between Y1 and Y2 and wait for all this at C1. When X1 sends an update to C1, it strips all the communities, so we don't need to send an update message to C1 because the attribute is actually the same. This is enabled by default in FRR since the 8.0 release; graceful restart and route refresh are exceptions, where updates are forced to be sent. Okay.
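Going back to long lived graceful restart, in FRR configuration this combination — standard graceful restart skipped by a zero restart time, LLGR taking over — might be expressed roughly as below. The LLGR command name and the stale-time unit are assumptions to verify against the FRR BGP documentation; the LLGR_STALE community itself, 65535:6, is the well-known value from the LLGR specification.

```
router bgp 64500
 ! restart-time 0 skips classic graceful restart entirely
 bgp graceful-restart restart-time 0
 ! keep LLGR-stale routes for up to 24 hours (command name assumed)
 bgp long-lived-graceful-restart stale-time 86400
```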

Four years ago, at Hostinger, we missed a feature in ‑‑ Linux to set a metric for a community in default ‑‑ then I started looking at the code and discovered it was not implemented, and that it might be fun to do this. Before that, at that time, I didn't have any experience in coding, or even in the C language, but I was still motivated to go ahead and learn new stuff, at least how protocols work from the developer perspective. And I want to ask you: do we have any contributors to FRR here? Not many. How many of you use FRR? Okay. So I will provide a quick overview of how you should start contributing. The first step is just creating your own fork of the FRR repository; create a separate branch, do the changes, apply code formatting, because otherwise you will have to fix it anyway; run some local unit tests; and commit the changes. By the way, every new CLI change, new protocol daemon or whatever requires documentation and a topology test, so it's useful to have them in advance.

Then just create a pull request on GitHub based on the development branch, and in a couple of seconds a few CI tests start. As mentioned before, 50% of them fail, but fortunately we can rerun them by triggering on GitHub, by writing a comment like "CI rerun" and the reason for that. When it's green, it's a good sign to start reviewing. Do expect that some pull requests can be hanging for a longer time, because some changes require more attention, discussions or so. Pull requests marked as draft are skipped from review. Every Tuesday we have a technical community call; you can join it as well to discuss recent PRs and issues. I suggest joining the FRRouting Slack; the mailing lists also exist, but they are kind of abandoned. Now, the most interesting part.

We have around 23,000 commits and around 7,000 pull requests merged; kind of steady growth. This is the graph showing how many commits and pull requests we have per release; as we see in the timeline, 8.0 was a huge release, but usually between 500 and 1,000 pull requests per release is the normal pace. The merged versus closed pull request ratio is quite good for a project dealing with a large number of pull requests; a closed one can in some cases still be a real fix, but it can be a duplicate, or the author was asked for more changes etc. My time is almost over, I have to speed up. Issues look kind of the same: for the last year, we closed more issues than were created. Real profit.

This is the unique number of contributors per release. Again, 8.0 is the peak. NetDEF remains the winner when talking about the number of commits per organisation. And thanks; I think I can take a couple of questions if the time is okay? By the way, we have Martin and two Davids here, and they know much more than me, so you can ask them what you like about FRR.

MARTIN WINTER: Any quick questions?

MARIA MATEJKA: CZ.NIC. I would like to ask a performance question: if you are implementing the Lua bindings, have you measured how long it takes? We were trying to do it in BIRD and it totally failed, on initialisation of the Lua context.

DONATAS ABRAITIS: Personally I didn't, maybe David can.

MARIA MATEJKA: Thank you.

DONATAS ABRAITIS: Lua was implemented as a GSoC project, but I don't know about any performance testing.

MARTIN WINTER: Thank you

(Applause)

ONDREJ FILIP: Thank you very much. And the next speaker is Guillaume, who is going to talk about Peering Manager; the tool was introduced four years ago.

GUILLAUME MAZOYER: In Marseille.

ONDREJ FILIP: Let's see what has happened over these four years.

GUILLAUME MAZOYER: I know my name is hard to pronounce, for most of you at least. I am the lead developer of Peering Manager, the creator of it. Just a quick poll: how many of you know about this software already? Okay. Who actually uses it? Okay. So I do have some users, okay.

So, Peering Manager was introduced at the RIPE meeting in Marseille as a lightning talk. It's a project which is not related to any company, written by engineers for engineers mostly, to provide some peering management, as you can guess.

Under the hood, it's written in Python with Django; if you run NetBox, it runs pretty much in the same way. It works with PostgreSQL as the database and Redis as cache and scheduling tool, so you can quickly start an instance on your own without any trouble. The REST API is a huge part of the work in Peering Manager, for you to integrate it into any automation workflow you already have. We do have a NAPALM integration, so you can push configuration to your routers on your own, and it integrates with other tools like peeringDB; recently we shipped some features about IX‑API and how to interact with it. And of course it's under an Open Source licence and available on GitHub.
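As a hedged sketch of what driving such a REST API from an automation workflow can look like: the base URL, token header and endpoint path below follow common Django REST Framework conventions and are assumptions for illustration, not Peering Manager's documented API.

```python
# Sketch only: build (but do not send) an authenticated request for a
# hypothetical Peering Manager REST endpoint. The base URL, token and
# endpoint path are placeholders following typical Django REST Framework
# conventions; check the real API documentation before reusing them.
import json
from urllib.request import Request

BASE_URL = "https://peering.example.net/api"
API_TOKEN = "change-me"  # per-user API token (placeholder)

def build_request(path, payload=None):
    """Return a ready-to-send urllib Request for the given API path."""
    headers = {
        "Authorization": f"Token {API_TOKEN}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    data = json.dumps(payload).encode() if payload is not None else None
    method = "POST" if data is not None else "GET"
    return Request(f"{BASE_URL}{path}", data=data, headers=headers,
                   method=method)

# e.g. create an autonomous system object (field names assumed)
req = build_request("/peering/autonomous-systems/",
                    {"asn": 64500, "name": "Example"})
```

Sending the request is then a matter of passing it to `urllib.request.urlopen` (or swapping in any HTTP client) once the instance and token are real.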


Why did I write this? It mostly started as a side project for my own needs, and I decided to Open Source it to make people able to track any BGP configuration changes: updating sessions, deleting sessions and adding them, basically operating BGP on a day‑to‑day basis. The idea was to eliminate all the YAML and JSON stuff we can have in huge files, but actually, in the end, I am rewriting some features to be able to dump data into YAML and JSON. It also provides a user interface for day‑to‑day management. It's focused on BGP, which means there is no interaction with other routing protocols or stuff like that; it's really specialised in BGP and tries to do it properly, at least.

There is not a lot of text to describe, but you can actually see how it interacts with peeringDB or IX‑API ‑‑ there is a cache, a front end, with NAPALM towards the routers at the bottom right, because users need to do something at least.

Features to come: we are going to improve the IX‑API integration and the peering workflow ‑‑ this is something I have been asked for many times, to be able to receive peering requests from an external party, approve them, provision them automatically and stuff like that ‑‑ export templates to a Git repository, and, in a strange way, a lot of users ask for mailing features; I don't like mail so, yeah, whatever.

But let's have a look at it for real. I took some screenshots and it's going to be a bit ‑‑ yeah, not that good, okay. But it does look like this. You can't read it, but you can see the shapes. For instance, you have some tasks that can run in the background and you have some reporting. You have some sessions, some autonomous system to create; it basically pulls the data from peeringDB and pre‑fills the form for you, so most of the time you just have to click "create" and that's it. In the same way you have the details: you can see what Peering Manager knows about your peers and what it can discover, thanks to peeringDB as well, and all the IXP peering sessions to set up on your side. So, for instance, if you are new to Peering Manager, you just enter your own AS number, and then when you go into the internet exchange tab and select "import from peeringDB", it's going to get from peeringDB all the IXPs you are connected to and provision them automatically, so no need for a lot of typing or clicking everywhere.

You have your sessions, of course. They can also be discovered through peeringDB: for instance, on an IXP, you say "I want to peer with any autonomous system", it's going to list all the autonomous systems that are on the IXP, and you can provision the sessions automatically, just with a few clicks of a button.

It was heavily focused on IXPs in the beginning, but now Peering Manager also manages sessions in a global fashion, so you can add sessions to your customers, to your transit providers, to Cloud exchanges, whatever. It can do pretty much everything. I don't recommend you document your iBGP in it, because you can do a lot of weird stuff, and do you really want to break your iBGP? I am not sure about it.

It discovers connections to your IXPs, of course. And this was a feature I wrote for myself: I basically had no automation at the time and decided to generate the configuration for my routers, and this is now the most used feature of Peering Manager. There is a Jinja2‑based templating system inside Peering Manager, and it generates configuration for your routers based on what Peering Manager knows about your BGP sessions and peers. So it is a huge feature. It can take a lot of time to generate the template, actually, because if you have, I don't know, hundreds of thousands of peering sessions, of course, if you put them all on the router, the config is going to be quite long and is going to take a lot of time to generate.
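A minimal template in that spirit might look like the fragment below. The variable names (`local_as`, `internet_exchange_points`, the session fields) are assumptions about the template context made for illustration; check Peering Manager's templating documentation for the real context before reusing them.

```
{# Sketch only: emit one BGP neighbor per IXP session (names assumed) #}
router bgp {{ local_as.asn }}
{%- for ixp in internet_exchange_points %}
{%- for session in ixp.sessions %}
 neighbor {{ session.ip_address }} remote-as {{ session.autonomous_system.asn }}
{%- endfor %}
{%- endfor %}
```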

And, yeah, you still can't read it, but you see the shape again: this is what a Peering Manager instance looks like when it's used, so you have a bunch of numbers and also a change log, so you know who did what and when, and you are able to audit your BGP changes thanks to this as well. All the features you have seen in the user interface are also available in the API, so you don't actually need to use this HTML stuff; if you want, you can just put it inside your automation workflow using the API, it's pretty much the same and you can do a lot of stuff with it as well.

Of course it's an Open Source project; if you want to help, open issues, open pull requests, sponsor features. I would like to thank DE‑CIX a lot for its sponsorship; they have believed in me for many months now and it's actually really helpful to have them on my side. I have a few other sponsors, and some hidden ones that don't want to appear in this beautiful advertisement. And there is a lot of stuff to do ‑‑ I am pretty much alone for now ‑‑ so if you want to help and if you want to break some code, feel free to reach out to me.

And if you have any questions, I will be here to help you, and I will still be available outside if you want to talk to me. If you are a user, come say hi, it's always a pleasure to meet you.

ONDREJ FILIP: Thank you very much, first of all.

(Applause)

ONDREJ FILIP: I don't see any remote questions, so let's start with one physically in the room.

WOLFGANG TREMMEL: Working for the DE‑CIX academy. First, thank you for the software, it's really great. Just a remark more than a question: the first hurdle I had when I started using it was writing a template, so the second thing after writing a template was writing a template tutorial, and I would like to encourage other users of Peering Manager to put their templates into the repository, so new users can learn how to write templates. Because that's the big hurdle when you start using it: you have to write a template to basically push the information onto your routers. Thank you.

GUILLAUME MAZOYER: Thank you, Wolfgang, for the huge work done on the documentation, of course, because, you know, we are Open Source software, the documentation is as important as the code. Wolfgang and Julian here helped me a lot in documenting and templating stuff, and of course for Cisco ‑‑ I am a Juniper guy ‑‑ thank you for the Cisco templates.

SPEAKER: From ‑ telecom. We bumped into each other yesterday in the elevator; I want to publicly thank you for the software. We are users of it, and we just focus on using Peering Manager as a source of truth, so the provisioning mechanism for us is something external and different. We really appreciate the REST API and we would like to have stability in the REST API as much as possible, if possible, of course.

GUILLAUME MAZOYER: I know I break a lot of stuff. Sorry for that.

SPEAKER: Dan, from ISC. We have a slightly unique network in that we peer with a unique AS per site; we have several dozen ASes, because we are an Anycast provider, and we like to track them. Is there any support to be able to do that, or is it reliant on a single peering AS?

GUILLAUME MAZOYER: It's a good question. In the very beginning it was only made for one AS, so if you had multiple ASes, you had to have multiple instances.

Dan: That was my fear.

GUILLAUME MAZOYER: Now, that's not the case any more; there is some kind of a context and stuff like that for all the ASNs you have.

Dan: This wheel has been attempted to be reinvented many, many times, with false starts, and that's one of the most solid implementations I have ever seen of it.

ONDREJ FILIP: Any other question? Not even remotely. So thank you very much.

GUILLAUME MAZOYER: Thank you.

(Applause)

ONDREJ FILIP: Trust me or not, I know it's horrible, but we just reached the end of the session. But that's not for the last time; I am sure you will see us next time. So, I don't know, guys, do you want to say something nice at the end of the session?

MARTIN WINTER: Thank you to all the ones who submitted presentations this time. I hope we get even more next time; we are trying to get a little bit more time, especially if it's another physical meeting, which I really hope for. So bring up the good ideas, and don't even wait for the call for presentations; if you have some ideas, feel free to send an e‑mail to the Working Group Chairs any time.

MARCOS SANZ: Thank you all, and thank you to the stenography; I was really missing you, thank you for being back. Bye‑bye.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND