My Experiences With Cisco's VIRL

Since it has been out for more than a year, and has been developed and improved tremendously during that time, I decided to finally take the plunge and buy a year's subscription to the Cisco VIRL software.

Part 1: Comparing and Tweaking VIRL

Until now, I have been using some combination of real hardware, CSR1000vs, and IOL instances for studying and proof-of-concept testing. My first impression of VIRL is that it is a BEAST of a VM with regards to CPU and RAM consumption. I installed it on my MacBook Pro first and allocated 8GB to it. However, its use was very limited, as I was unable to load more than a few nodes. I then moved it to my ESXi server, which is definitely more appropriate for this software in its current state.

I knew that the CSR1000vs were fairly RAM hungry, but at the same time they are meant to be production routers, so that's definitely a fair tradeoff for good performance. The IOSv nodes, while they do take up substantially less RAM, are still surprisingly resource intensive, especially with regards to CPU usage. I thought the IOSv nodes were going to be very similar to IOL nodes with regards to resource usage, but unfortunately, that is not yet the case. I can run several tens of IOL instances on my MacBook Pro and have all of them up and running in less than a minute, all in a VM with only 4GB of RAM. That is certainly not the case with IOSv. Even after getting the VIRL VM on ESXi tweaked, it still takes about two minutes for the IOSv instances to come up. Reloading (or doing a configure replace) on IOL takes seconds, whereas IOSv still takes about a minute or more. I know that in the grand scheme of things, a couple of minutes isn't a big deal, especially if you compare it to reloading an actual physical router or switch, but it was still very surprising to me to see just how much of a performance and resource usage gap there is between IOL and IOSv.

Using all default settings, my experience of running VIRL on ESXi (after going through the lengthy install process) was better than on the MBP, but still not as good as I thought it should have been. The ESXi server I installed VIRL on has two Xeon E5 CPUs, which are Nehalem chips that are each quad-core with eight threads. The system also has 48GB of RAM. I have a few other VMs running that collectively use very little CPU during normal usage and about 24GB of RAM, leaving 24GB for VIRL. I allocated the full 24GB to the VIRL VM and placed it on an SSD.
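Before carving up a host like this, it is worth confirming what ESXi actually sees. On recent ESXi releases, the esxcli hardware namespace reports the socket/core/thread layout and installed memory from the ESXi shell:

    # Packages, cores, threads, and whether Hyper-Threading is active
    esxcli hardware cpu global get

    # Total physical memory visible to the host
    esxcli hardware memory get

Knowing whether Hyper-Threading is actually enabled matters for the vCPU decisions described next.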
The largest share of CPU usage comes from booting the IOSv instances (and maybe the other node types as well). The issue is that upon every boot, a crypto process is run and the IOS image is verified. This pegs the CPU at 100%, and it is what contributes the most to the amount of time the IOSv node takes to finish booting, I believe. This may be improved quite a bit on newer-generation CPUs.

When I first started, I assigned four cores to the VIRL VM. The IOSv instances would take five to ten minutes to come up, and performing a configure replace took a minimum of five minutes. That was definitely unacceptable, especially when compared to the mere seconds it takes for IOL to do the same thing. I performed a few web searches and found some different things to try.

The first thing I did was increase the core count to eight. Since my server only has eight physical cores, I was a little hesitant to do this because of the other VMs I am running, but this is a case where I think Hyper-Threading may make a difference, since ESXi sees 16 logical processors. After setting the VM to eight cores, I noticed quite a big difference, and my other VMs did not appear to suffer from it.

I then read about another tweak: presenting the proper CPU topology to the VM. Originally, the VM was presented with eight single-core CPUs. I then tried allocating it as a single eight-core CPU, and the performance increased a little bit. I then allocated it properly as two quad-core CPUs (matching reality), and this was where I saw the biggest performance increase with regards to both boot time and overall responsiveness.

VMware sees the aggregate clock speed of all eight cores as the pool available to the VM, so another tweak I performed was to set the VM's CPU limit just below that aggregate, so that it could no longer take over the entire server. I also configured the memory so that it could not overcommit: the VM will not use more than the 24GB I have allocated to it. In the near future, I intend to upgrade my server from 48GB to 96GB, so that I can allocate 64GB to VIRL (that is going to be necessary when I start studying service provider topologies using XRv).

I should clarify and say that VIRL still doesn't run as well as I think it should, but it is definitely better after tweaking these settings. The Intel Xeon E5 CPUs in my server were released in the first quarter of 2009. That is seven years ago, as of this writing. A LOT of improvements have been baked into Xeon CPUs since that time, so I have no doubt that much of the slowness I experienced would be alleviated with newer-generation CPUs.

I read a comment that said passing the CCIE lab was easier than getting VIRL set up on ESXi. I assure you, that is not the case. The VIRL team has great documentation on the initial ESXi setup, and with regards to that, it worked as it should have without needing anything beyond their instructions. However, as this post demonstrates, extra tweaks are needed to tune VIRL to your system. It is not a point-and-click install, but you don't need to study for hundreds of hours to pass the installation, either. VIRL is quite complex and has a lot of different components.
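Since these tweaks end up scattered across several vSphere dialogs, it can help to see them in one place. Here is a rough sketch of how they map onto the VM's .vmx file; the key names are standard VMX options, but the values shown are illustrative (they mirror my 24GB allocation, and the CPU limit should be set just below your own host's aggregate clock speed):

    # Eight vCPUs presented as two quad-core sockets, matching the host
    numvcpus = "8"
    cpuid.coresPerSocket = "4"

    # 24GB allocated and fully reserved, so ESXi cannot overcommit it
    memsize = "24576"
    sched.mem.min = "24576"

    # CPU limit in MHz, capped below the host total so VIRL
    # cannot starve the other VMs (value is an example only)
    sched.cpu.max = "16000"

As a side benefit, reserving all of the VM's memory eliminates the multi-gigabyte .vswp swap file that would otherwise sit on the SSD datastore.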
It is expected that complex software needs to be tuned to your environment; there is no way for the VIRL team to plan a turnkey solution for every environment in advance. Reading over past comments from others, VIRL has improved quite dramatically in the past year, and I expect it will continue to do so, which will most likely include both increased performance and ease of deployment.

Part 2: INE's CCIE RSv5 Topology on VIRL

VIRL topology + INE RSv5 ATC configs.

After getting VIRL set up and tweaked to my particular environment, my next step is to set up INE's CCIE RSv5 topology. This is what I will be using VIRL for the most, initially. I was satisfied with using IOL, but I decided to give VIRL a try because it not only has the latest versions of IOS included, it has many other features that IOL by itself isn't going to give you. For example, VIRL includes visualization and automatic configuration options, as well as other features like NX-OSv. I was particularly interested in NX-OSv since I have also been branching out into datacenter technologies lately, and my company will be migrating a portion of our network to the Nexus platform next year. At this point in time, NX-OSv is still quite limited and doesn't include many of the fancier features of the Nexus platform, such as vPC, but it is still a good starting point for familiarizing yourself with the NX-OS environment and how its basic operation compares to traditional Cisco IOS. Likewise, I intend to study service provider technologies, and it is nice to have XRv available as well.

I configured the INE ATC topology of ten IOSv routers connected to a single unmanaged switch node. I then added four IOSv-L2 nodes, with SW1 connecting to the unmanaged switch node and the remaining three L2 nodes interconnected to each other according to the INE diagram. The interface numbering scheme had to change, though: the FastEthernet ports referenced in INE's physical-switch configurations become GigabitEthernet ports on the IOSv-L2 nodes. I built this topology and used it as the baseline while I was testing and tweaking the VIRL VM, as described in Part 1. I was familiar with how the topology behaved in IOL, as well as with using CSR1000vs and actual Catalyst switches.

After getting things to an acceptable performance level, the next thing to sort out was saving and loading configurations. If you stop the simulation completely, the next time you start it, everything is rebuilt from scratch. If you stop a node itself and then restart it, all configurations and files are lost. There is a snapshot system built in to the web interface, but it is not very intuitive at this point in time. Likewise, you have the option of extracting the current running configurations when the nodes are stopped, but this does not include anything saved on the virtual flash disks. Some people prefer having a separate VIRL topology file for each separate configuration, but I find it more practical (and faster) to use the configure replace option within the existing topology to load the configurations. Luckily, the filesystem on the VIRL host VM is not going to change between simulations, and all of the nodes have a built-in method of communicating with the host. This makes it an ideal place to store the configuration files.

I went through and modified the initial configurations to match the connections in my VIRL topology. You can download the VIRL topology and matching INE configurations I assembled here. For the routers, this included replacing every instance of GigabitEthernet1 with GigabitEthernet0/1. The switch configs were a little more involved and required manual editing, but there are not nearly as many switch configurations as there are router configurations.
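Most of the router edits can be scripted rather than done by hand. Here is a minimal sketch with GNU sed, assuming the extracted INE router configs are named R1.cfg through R10.cfg (the file names and numbering are my assumptions; on macOS/BSD sed, use -i '' instead of -i):

    # Rewrite CSR-style interface names to IOSv numbering,
    # e.g. GigabitEthernet1 -> GigabitEthernet0/1
    sed -i 's|GigabitEthernet\([0-9]\)\b|GigabitEthernet0/\1|g' R*.cfg

A quick grep for any remaining single-digit GigabitEthernet references (or a visual diff of one file) is a cheap sanity check before loading anything.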
After getting the configuration files in order, I used SCP to copy the tar files to the VIRL VM using its external-facing (LAN) IP address, and I placed the files into /home/virl/. Originally, I added an L2 External (FLAT) node for every router and switch in the topology so that each node could communicate with the VIRL host VM. However, someone pointed out to me that there is a much easier way to do this: click the background of the topology (in design mode), select the "Properties" pane, then change the "Management Network" setting to "Shared flat network" under the "Topology" leaf. This sets the GigabitEthernet0/0 interfaces to receive an IP address via DHCP on the flat network (172.16.1.0/24 by default). This setting only applied to the router nodes when I tried it, so I still had to manually edit the configurations of the IOSv-L2 switch nodes.
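For the switch nodes, the manual equivalent is small: turn GigabitEthernet0/0 into a routed port and let it pull an address from the flat network. And once every node can reach the host, loading a stored configuration is a two-step operation from the node itself. The address, username, and file name below are assumptions from my setup (the VIRL host answering SCP as user virl on the flat network, with the configs sitting in /home/virl/), so verify them against your own deployment:

    ! On each IOSv-L2 node: management port via DHCP on the flat network
    interface GigabitEthernet0/0
     no switchport
     ip address dhcp

    ! From any node: pull a config from the VIRL host and swap it in
    R1# copy scp://virl@172.16.1.254/R1.cfg flash:R1.cfg
    R1# configure replace flash:R1.cfg

With this in place, resetting a lab section costs roughly the minute per node that configure replace takes on IOSv, rather than a full simulation rebuild.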