<get>
operation to validate the changes I made in a programmatic way. Here we are!
I will show you the following two different options for the config validation:
✅ IOS-XE config validation with Native YANG model
✅ NX-OS config validation with OpenConfig YANG model
Let’s first recap what I did in the previous blog post. For the initial lab setup please refer to my DevNet Expert Lab on Cisco Modeling Labs GitHub repository. Then I made the configuration changes described in the previous blog post NETCONF XML Payload with YANG models. In short, I configured the interfaces with IP addressing and the BGP setup shown in the diagram below.
Screenshot 1: NETCONF Lab overview in Cisco Modeling Labs.
Each of the three devices (Router1, Router2, and Nexus1) has its own BGP AS and neighbor configuration on all interfaces except the management interface. In my previous blog post I validated the changes using Cisco YANG Suite with its built-in function to run RPCs (Remote Procedure Calls) against the devices in the web browser. This is not a very efficient way to validate the configuration changes and BGP neighbors. Now let’s continue where I left off and validate in a programmatic way that the configuration changes were applied.
You can find all files from the examples used in the previous and this post in my GitHub repository netconf-example. If you run into any issues with the setup or find any errors, please let me know and/or leave a comment using GitHub issues.
Before we move on, let's do a short recap of the BGP neighbor states so that we can understand the returned data. BGP uses a finite state machine (FSM) to maintain a table of all BGP peers and their operational status. A BGP session may report one of the following states: Idle, Connect, Active, OpenSent, OpenConfirm, and Established.
We want to see the Established state, because in this state, the BGP session is established and BGP neighbors exchange routes via Update messages. For more details about BGP take a look at the sample chapter BGP Fundamentals from the book Troubleshooting BGP: A Practical Guide to Understanding and Troubleshooting BGP available at Cisco Press.
Now we will use an RPC get request to check the BGP neighbors. With the help of Cisco YANG Suite we will again create the filters which we will then use within our Python script, but let’s first check using the built-in RPCs function. As mentioned in the previous post, YANG Suite provides a set of tools and plugins to learn, test, and adopt YANG programmable interfaces such as NETCONF, RESTCONF, gNMI, and more. I am using the DevNet Expert Candidate Workstation VM, on which Cisco YANG Suite is already installed and available at http://localhost:8480. The YANG module sets for IOS-XE and NX-OS we will use are already configured. For installation options please refer to the Cisco YANG Suite documentation.
Please note that I covered creating the YANG sets in the previous post. Check the Getting Started section of the documentation. Create device profiles and download the supported YANG models from the devices, or upload YANG model files from your workstation or from the public YANG GitHub repository, via the Setup menu on the right. After that you can create YANG module sets for IOS-XE and NX-OS with the supported modules you need.
In YANG Suite go to Protocols –> NETCONF, select IOS-XE on the YANG Set from the Dropdown menu, search and select the Cisco-IOS-XE-native module and load the modules. Then choose the NETCONF operation, <get>
in this case, and select Router1 as the device. Before browsing through the YANG tree, click on “YANG Tree -> Options” and choose NETCONF XML (RPC parameters only) under Display as RPC(s) to show only the configuration payload parameters. We don’t need the other XML overhead.
Browse the YANG tree and look for the router container. Expand it and move on to the ios-bgp:bgp list element which also needs to be expanded. Then you will find the ios-bgp:neighbor list element. Click the checkbox in the value column and then generate the filter using the Build RPC button. The filter should look like screenshot 2 below.
Screenshot 2: Validating BGP Neighbors.
Next click the Run RPC(s) button to use the filter. A new browser tab will be opened where you can follow the complete RPC with all details including the output data of the BGP neighbors.
Screenshot 3: BGP Neighbors reply.
Looks good, right? The BGP neighbors are there. But wait: the output tells us that the configuration we pushed via NETCONF was applied to the device, but it does not tell us whether it is actually working and forming BGP neighbors. We used the get operation with a filter on the configuration model, so we requested configuration data and not operational data. This is similar to a show run command: it returns only the configuration data from the device.
Router1# show run | section bgp
router bgp 65001
bgp log-neighbor-changes
neighbor 10.0.10.2 remote-as 65002
neighbor 10.0.20.2 remote-as 65002
neighbor 10.0.30.3 remote-as 65003
Example 1: BGP configuration on an IOS-XE device.
Would it make more sense to request the operational data for the BGP neighbors? Yes, indeed! Let’s quickly switch to the operational YANG module which is called Cisco-IOS-XE-bgp-oper and load the module. Clear the RPC filter data using the red Clear RPC(s) button before creating a new filter. Expand the bgp-state-data and the neighbors containers and click the checkbox of the neighbor list element in the value column. Generate the filter using the Build RPC button and you should get the filter as shown in screenshot 4 below.
Screenshot 4: BGP operational neighbors filter.
Run the RPC using the Run RPC(s) button. Are you overwhelmed by the reply data that came back, as shown in screenshot 5 below? Building a filter precise enough to return only the data you want is an important step in this process. Try to avoid requesting unneeded data; depending on your environment, the reply could contain tons of data of which you need very little.
Screenshot 5: BGP operational neighbors full reply.
Let’s tweak the filter to minimize the data in the RPC reply as much as possible. For this, expand the neighbor list element and click into the neighbor-id and session-state fields in the value column, but leave both empty as shown in screenshot 6. Clear the RPC field again and build a new RPC filter.
Screenshot 6: BGP operational neighbors filter on neighbor id and session-state.
Run the RPC again and we will get a much leaner reply back, as you can see in screenshot 7 below.
Screenshot 7: BGP operational neighbors reply with neighbor id and session-state.
The filter we will use later in our Python script should look like the code example below. It makes the RPC filter on the device's BGP neighbors while returning only the neighbor ID and the session state, which makes the reply more granular and easier to consume. The filter is saved as an XML file named native_bgp_neighbor_filter.xml in the filters folder.
<filter xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<bgp-state-data xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp-oper">
<neighbors>
<neighbor>
<neighbor-id/>
<session-state/>
</neighbor>
</neighbors>
</bgp-state-data>
</filter>
Example 2: BGP neighbor filter based on Cisco IOS-XE Native YANG model.
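If you prefer not to hand-write such filters, the same subtree filter can also be generated programmatically with Python's standard library. A hedged sketch (the function name is my own, not part of the post's script):

```python
import xml.etree.ElementTree as ET

# Namespaces used by the filter in example 2
BASE = "urn:ietf:params:xml:ns:netconf:base:1.0"
OPER = "http://cisco.com/ns/yang/Cisco-IOS-XE-bgp-oper"

def build_bgp_filter():
    """Build the subtree filter from example 2 programmatically."""
    filt = ET.Element(f"{{{BASE}}}filter")
    state = ET.SubElement(filt, f"{{{OPER}}}bgp-state-data")
    neighbors = ET.SubElement(state, f"{{{OPER}}}neighbors")
    neighbor = ET.SubElement(neighbors, f"{{{OPER}}}neighbor")
    # Empty leaf elements select which fields the reply should contain
    ET.SubElement(neighbor, f"{{{OPER}}}neighbor-id")
    ET.SubElement(neighbor, f"{{{OPER}}}session-state")
    return ET.tostring(filt).decode()

print(build_bgp_filter())
```

The serialized element tree is semantically identical to the hand-written filter (only the namespace prefixes differ), which is handy when you need to generate many similar filters.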
Like we did for the configuration part during the previous post, choose NX-OS from YANG Set, load the OpenConfig openconfig-network-instance model, and choose the <get>
operation with Nexus1 as device. Move further down to the protocol list element under the protocols container. Expand the bgp and neighbors containers, mark the neighbor list element and click into the neighbor-address field without adding anything. Expand the state container and click into the session-state value field also without adding anything. Clear the RPC filter data using the red Clear RPC(s) button before creating a new filter using the Build RPC button.
Screenshot 8: BGP operational neighbors filter on neighbor address and session-state.
The name key of the network-instance was automatically filled with the value default; in case you need another instance, you have to specify it here. Run the RPC using the Run RPC(s) button.
Screenshot 9: BGP operational neighbors reply on NX-OS.
Perfect! That looks very good. Now there is another cool thing I want to show you. Change the Device back to Router1 and run the RPC again without any other changes.
Screenshot 10: BGP operational neighbors reply on IOS-XE.
Isn’t that cool? You also got the BGP neighbors of the IOS-XE device Router1! A filter based on OpenConfig YANG models works on all platforms that support them, in this case NX-OS and IOS-XE. So with OpenConfig YANG models you can develop your filters once and use them cross-platform, which is one important advantage of using OpenConfig YANG models to create filters.
The filter is saved as an XML file named openconfig_bgp_neighbor_filter.xml in the filters folder.
<filter>
<network-instances xmlns="http://openconfig.net/yang/network-instance">
<network-instance>
<name>default</name>
<protocols>
<protocol>
<bgp>
<neighbors>
<neighbor>
<neighbor-address/>
<state>
<session-state/>
</state>
</neighbor>
</neighbors>
</bgp>
</protocol>
</protocols>
</network-instance>
</network-instances>
</filter>
Example 3: BGP neighbor filter based on OpenConfig YANG model.
We built the filters for the validations and saved the files. The next task is to develop a Python script for the validation in a programmatic way.
At the top of the Python script we import the modules we need and define the list of devices by their IP addresses, as you can see in example 4. The manager from ncclient will be used for the NETCONF connection, as we already know from the previous post. The etree module from lxml will be used to serialize the response data into an encoded string representation of its XML tree, and then we will be able to use the xml.etree.ElementTree module to browse through the XML data. You will find more details on the ElementTree XML API documentation page, which provides some easily understandable examples.
'''Python script to validate configuration from XML payload via NETCONF'''
from ncclient import manager
from lxml import etree
import xml.etree.ElementTree as ET
# List of devices
devices = ["192.168.255.51", "192.168.255.52", "192.168.255.53"]
Example 4: The top of the Python script.
In example 5 we have the device_connect function, which is pretty straightforward. The function takes the dev variable, which is the IP address of the device, uses the ncclient manager to connect to it, and returns the connection object.
def device_connect(dev):
    '''Function to connect to the devices'''
    con = manager.connect(
        host=dev,
        username="expert",
        password="1234QWer!",
        hostkey_verify=False
    )
    return con
Example 5: The function to connect to the devices.
To get the BGP neighbors from the devices we have the get_bgp_neighbors function, which takes the connection object con and the NETCONF filter. The response data is then converted to an XML string using the etree.tostring function and returned.
def get_bgp_neighbors(con, filter):
    '''Function to get BGP neighbors based on filter and return data'''
    # Get BGP neighbor state data
    response = con.get(filter=filter)
    # Convert response data to XML string
    data = etree.tostring(
        response.data_ele,
        pretty_print=True
    ).decode()
    return data
Example 6: The function to get the BGP neighbors and return the data.
Thanks to Kirk Byers' blog post IOS-XE and NETCONF Candidate Configuration Testing, Part1; especially the section Grabbing the XML Configuration was very helpful for getting an idea of how to convert the XML reply data.
The first part of the main function, as shown in example 7, starts with a loop iterating over the devices list and calling the device_connect function with the device IP address in the variable named device. Then we need to determine which filter to use via the variable named yang_type. For the NX-OS device with IP address 192.168.255.53 we use the filter based on the OpenConfig YANG models, and for both IOS-XE devices the one based on the IOS-XE Native YANG model. For browsing through the XML data tree using the ElementTree XML API we also need to specify the correct URL path based on the YANG model, which we will use later. Then we open the appropriate filter file from the filters directory and assign it to the netconf_filter variable.
if __name__ == '__main__':
    # Loop through the devices and connect to it
    for device in devices:
        connect = device_connect(device)
        # Choose which filter and YANG model path to use
        if device == "192.168.255.53":
            yang_type = 'openconfig'
            url = '{http://openconfig.net/yang/network-instance}'
        else:
            yang_type = 'native'
            url = '{http://cisco.com/ns/yang/Cisco-IOS-XE-bgp-oper}'
        # Open file for NETCONF filter
        with open(f'filters/{yang_type}_bgp_neighbor_filter.xml') as file:
            netconf_filter = file.read()
Example 7: The main function part one.
The second part of the main function, as shown in example 8, starts with calling the get_bgp_neighbors function and parsing the root of the XML tree from the xml_data variable. Then we can browse through the XML reply data based on the YANG model we used before.
        # Get XML data and read
        xml_data = get_bgp_neighbors(connect, netconf_filter)
        root = ET.fromstring(xml_data)
        # Browse through the XML tree and print all neighbors with state
        print(f'\nBGP neighbors for {device}:')
        if device == "192.168.255.53":
            for neighbor in root[0][0][1][0][2][0].iter(f'{url}neighbor'):
                address = neighbor[0].text
                state = neighbor[1][0].text
                print(f'Neighbor {address} -> {state}')
        else:
            for neighbor in root[0][0].iter(f'{url}neighbor'):
                address = neighbor[0].text
                state = neighbor[1].text
                print(f'Neighbor {address} -> {state}')
Example 8: The main function part two.
We need to climb down the ladder of XML data. In the case of the NX-OS device with IP address 192.168.255.53, the neighbors we need to iterate over are located at root[0][0][1][0][2][0]. The variable for the address can then be assigned using neighbor[0].text, because it is the first element in the neighbor list. The same goes for the state, where we need the second element of the neighbor and the first element of the state, which is neighbor[1][0].text. Screenshot 11 shows how we get there.
Screenshot 11: BGP Neighbors from root XML data reply for OpenConfig YANG.
In the case of the Native YANG model and the IOS-XE devices, the XML reply tree is a little easier to climb down, as you can see in screenshot 12. The neighbors are located at root[0][0], with the address as neighbor[0].text, the same as for NX-OS, and the state as neighbor[1].text, both on the same level.
Screenshot 12: BGP Neighbors from root XML data reply for Native YANG.
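As a side note, positional indexing such as root[0][0][1][0][2][0] is fragile: it breaks as soon as the device returns elements in a different order. A hedged alternative sketch that looks elements up by tag name instead, shown here against a small illustrative sample of the OpenConfig reply (values are made up):

```python
import xml.etree.ElementTree as ET

# Sample reply data modeled on the OpenConfig filter; values are illustrative
SAMPLE = """<data>
  <network-instances xmlns="http://openconfig.net/yang/network-instance">
    <network-instance>
      <name>default</name>
      <protocols><protocol><bgp><neighbors>
        <neighbor>
          <neighbor-address>10.0.30.1</neighbor-address>
          <state><session-state>ESTABLISHED</session-state></state>
        </neighbor>
      </neighbors></bgp></protocol></protocols>
    </network-instance>
  </network-instances>
</data>"""

NS = {'oc': 'http://openconfig.net/yang/network-instance'}

def parse_neighbors(xml_data):
    """Collect (address, state) pairs by tag name, not by position."""
    root = ET.fromstring(xml_data)
    pairs = []
    for nb in root.iter('{http://openconfig.net/yang/network-instance}neighbor'):
        address = nb.find('oc:neighbor-address', NS).text
        state = nb.find('oc:state/oc:session-state', NS).text
        pairs.append((address, state))
    return pairs

print(parse_neighbors(SAMPLE))  # [('10.0.30.1', 'ESTABLISHED')]
```

Because iter() searches the whole tree and find() matches namespaced tag names, this version keeps working even if the reply nests or orders elements differently.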
Alright, we are ready to move on with the final part, bring everything together, and validate the BGP neighbors.
Let’s quickly recap what we have now. The complete Python script is saved as netconf_validate.py in the main directory of the repository. Before that we created the filters, saved as native_bgp_neighbor_filter.xml from example 2 for the IOS-XE Native YANG model and openconfig_bgp_neighbor_filter.xml from example 3 for the OpenConfig YANG model. Both filter files are located in the filters directory.
Screenshot 13: GitHub repository files.
Please make sure to follow the instructions from the GitHub repository netconf-example before running the validation script. At this point the configuration has already been applied as described in the previous blog post NETCONF XML Payload with YANG models.
Let’s run the Python script and validate the BGP neighbors:
(venv) $ python netconf_validate.py
BGP neighbors for 192.168.255.51:
Neighbor 10.0.10.2 -> fsm-established
Neighbor 10.0.20.2 -> fsm-established
Neighbor 10.0.30.3 -> fsm-established
BGP neighbors for 192.168.255.52:
Neighbor 10.0.10.1 -> fsm-established
Neighbor 10.0.20.1 -> fsm-established
Neighbor 10.0.40.3 -> fsm-established
BGP neighbors for 192.168.255.53:
Neighbor 10.0.30.1 -> ESTABLISHED
Neighbor 10.0.40.2 -> ESTABLISHED
Example 9: Validating the BGP neighbors.
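Note that the same operational state is spelled fsm-established by the IOS-XE native model and ESTABLISHED by OpenConfig on NX-OS. If the script should assert success rather than just print the states, a small hedged helper (the name is mine, not from the post's script) can normalize both spellings:

```python
def is_established(state):
    """Return True for both 'fsm-established' (IOS-XE native model)
    and 'ESTABLISHED' (OpenConfig), ignoring case and the fsm- prefix."""
    normalized = state.strip().lower()
    if normalized.startswith('fsm-'):
        normalized = normalized[len('fsm-'):]
    return normalized == 'established'

print(is_established('fsm-established'))  # True
print(is_established('ESTABLISHED'))      # True
print(is_established('idle'))             # False
```

With this helper the validation loop could fail loudly on any neighbor that is not established, regardless of which YANG model produced the reply.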
Excellent! All devices show established sessions with their BGP neighbors. I hope it was again easy to follow and to replicate on your own setup. Here are the links to the previous blog post NETCONF XML Payload with YANG models and the GitHub repository netconf-example with all files again. Please let me know via GitHub issues if you ran into any issues with the setup or if you found any errors.
Thank you for reading this blog post and following along until the end. Please leave feedback in the comments!
The Network Configuration Protocol (NETCONF) provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses XML-based (Extensible Markup Language) data encoding for the configuration data and the protocol messages. A secure, connection-oriented session is established using remote procedure calls (RPC) between a client (your workstation, for example) and a server (the network device). The NETCONF protocol provides a set of operations to manage device configurations (get-config, edit-config, copy-config, delete-config) and retrieve device state data (get). As the CLI was made for humans interacting with devices, NETCONF was made for machines interacting with machines. For more detailed information about NETCONF I encourage you to read through RFC 6241, which is well written with good examples.
A big advantage of using NETCONF to manage your network device configuration is its transactional behavior. Let’s assume you’re going to configure a network device manually with a configuration snippet prepared in a text editor, to be copied and pasted into the CLI like we all did in the past. When there is an error in your configuration while pasting it into the CLI, you will end up with a partial configuration, which could cause serious issues.
Nexus1# conf t
Enter configuration commands, one per line. End with CNTL/Z.
Nexus1(config)# int ethernet1/10
Nexus1(config-if)# ip address 10.0.50.3/24
^
% Invalid command at '^' marker.
Nexus1(config-if)#
Example 1: Pasting commands from a text editor with error.
NETCONF’s transactional behavior ensures that the configuration is only applied when every bit of it is correct. As soon as any part is not accepted by the device, the whole configuration is refused. This mechanism provides a robust and resilient way to configure devices.
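This transactional guarantee can be relied on in code: if the device rejects any part of the payload, ncclient raises an RPCError and nothing is applied. A minimal hedged sketch (the function name is mine; con stands for an open ncclient session as used elsewhere in the post):

```python
def apply_config(con, payload):
    """Push an XML payload via <edit-config>; all-or-nothing."""
    try:
        reply = con.edit_config(target='running', config=payload)
        return reply.ok
    except Exception as err:
        # ncclient raises RPCError when the device refuses the payload;
        # thanks to NETCONF transactions, nothing was partially applied
        print(f'Configuration refused, device unchanged: {err}')
        return False
```

Compare this with the CLI paste above: there is no state to clean up after a failure, because the device never applied any part of the rejected payload.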
As described before, in this blog post I will focus only on a small part of NETCONF which is the creation of proper XML configuration data to be used with the edit-config operation to push the configuration to the device. I will show you the following three different options for the XML payload:
✅ IOS-XE config with Native YANG
✅ IOS-XE config with OpenConfig YANG
✅ NX-OS config with Native + OpenConfig YANG
For the NETCONF communication between the client (my workstation) and the server (the network devices) I will take advantage of the Python library ncclient. The following diagram shows the lab setup in Cisco Modeling Labs I am using for the example.
Screenshot 1: NETCONF Lab overview in Cisco Modeling Labs.
For the initial lab setup please refer to my DevNet Expert Lab on Cisco Modeling Labs GitHub repository. There are only the management interfaces and some basic features configured, everything else like the interfaces with IP addressing and BGP configuration will be done via NETCONF.
Before communicating with the devices using NETCONF, the NETCONF Agent must be enabled. The NETCONF Agent is enabled or disabled by entering the netconf-yang
command on IOS-XE and feature netconf
command on NX-OS. Additionally you need to enable OpenConfig on NX-OS using the feature openconfig
command. Make sure you have users with appropriate privileges configured on your devices, that is privilege level 15 on IOS-XE and the network-admin or dev-ops role on NX-OS. You need connectivity from your workstation via SSHv2 on TCP port 830; NETCONF does not support SSH version 1. That’s all part of the initial lab configuration in my example.
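Before digging into NETCONF errors, it can help to first confirm that the agent's TCP port answers at all. A minimal sketch using only the Python standard library (the helper name is mine):

```python
import socket

def netconf_reachable(host, port=830, timeout=3):
    """Return True if a TCP connection to the NETCONF port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Once netconf-yang (IOS-XE) or feature netconf (NX-OS) is enabled and reachable, this returns True for the device's management address and you can move on to the actual NETCONF session.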
The XML configuration data, also named payload, will be created with the help of Cisco YANG Suite. It provides a set of tools and plugins to learn, test, and adopt YANG programmable interfaces such as NETCONF, RESTCONF, gNMI and more. I am using the DevNet Expert Candidate Workstation VM on which Cisco YANG Suite is already installed and is available on http://localhost:8480. For installation option please refer to the Cisco YANG Suite documentation.
If it is your first time running YANG Suite, you should start with the Getting Started section of the documentation: create device profiles and download the supported YANG models from the devices, or upload YANG model files from your workstation or from the public YANG GitHub repository, via the Setup menu on the right. After that you can create YANG module sets for IOS-XE and NX-OS with the supported modules you need.
Screenshot 2: Module Sets in YANG Suite.
Now we can take a closer look at how to create the XML payload and use these models to send NETCONF RPCs at Protocols –> NETCONF in YANG Suite.
Let us start with creating the XML configuration payload for the IOS-XE Router1 using the Native YANG model. Go to Protocols –> NETCONF in YANG Suite, select IOS-XE on the YANG Set from the Dropdown menu, search and select the Cisco-IOS-XE-native module and load the modules. Then choose the NETCONF operation, <edit-config>
in our case, and select Router1 as the device. Before we start browsing through the YANG tree, click on “YANG Tree -> Options” and choose NETCONF XML (RPC parameters only) under Display as RPC(s) to show only the configuration payload parameters.
Screenshot 3: YANG Tree Settings in YANG Suite.
Now we can start browsing the YANG tree and look for the interfaces container. Unfortunately there is no option to sort the YANG tree alphabetically, so I recommend using the browser's search function. Expand the interfaces container, look for the GigabitEthernet list element, and expand it. The small key symbol at the name leaf tells you that this leaf is the key of the list item and is mandatory. Add the interface number 2 in the value field of the name. Another important setting is the shutdown leaf, which has only a checkbox. Check the box in the Value column and add the remove operation in the Operation column. This setting enables the interface and is the equivalent of the no shutdown CLI command.
Screenshot 4: GigabitEthernet list from interface container of the Cisco-IOS-XE-native module.
Scroll further down to the ip container, expand it, and then choose address-choice -> address -> address -> address-choice -> fixed case -> primary. Add the IP address 10.0.10.1 at the address leaf and the subnet mask 255.255.255.0 at the mask leaf. After that click Build RPC and the XML config payload is generated on the right side.
Screenshot 5: First XML config payload for interfaces from the Cisco-IOS-XE-native module.
You could use YANG Suite now to run an RPC against Router1 in a new browser tab and send the payload by clicking Run RPC(s). But for now we move on to the BGP configuration part, which can be found at the router container and the ios-bgp:bgp list element. The leaf ios-bgp:id is the key and represents the AS number, 65001 in our case. Expand the list element ios-bgp:neighbor, add the neighbor ID 10.0.10.2 as the value of the ios-bgp:ip leaf, and add 65002 to the ios-bgp:remote-as leaf. Click the red Clear RPC(s) button to clear the XML payload section before clicking Build RPC(s) again, otherwise the newly added XML payload data is appended to the existing data instead of re-creating the whole thing.
Screenshot 6: All XML config payload parts from the Cisco-IOS-XE-native module.
Now we have all XML payload configuration parts ready. Then we can replicate what we have put together so far for the other interfaces and BGP neighbors to create the full XML configuration payload for Router1:
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface>
<GigabitEthernet>
<name>2</name>
<shutdown xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" nc:operation="remove"/>
<ip>
<address>
<primary>
<address>10.0.10.1</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
<GigabitEthernet>
<name>3</name>
<shutdown xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" nc:operation="remove"/>
<ip>
<address>
<primary>
<address>10.0.20.1</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
<GigabitEthernet>
<name>4</name>
<shutdown xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" nc:operation="remove"/>
<ip>
<address>
<primary>
<address>10.0.30.1</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
</interface>
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp">
<id>65001</id>
<neighbor>
<id>10.0.10.2</id>
<remote-as>65002</remote-as>
</neighbor>
<neighbor>
<id>10.0.20.2</id>
<remote-as>65002</remote-as>
</neighbor>
<neighbor>
<id>10.0.30.3</id>
<remote-as>65003</remote-as>
</neighbor>
</bgp>
</router>
</native>
</config>
Example 2: Complete XML config payload for Router1 from IOS-XE native model.
So far so good. The complete XML config payload for Router1 from IOS-XE native models is ready. Now let us move on to the XML configuration payload for Router2, but this time we will use the OpenConfig YANG model.
To use OpenConfig YANG modules you need to select all the individual modules required to build the XML payload, not just a single native module like we did for Router1 before. We select IOS-XE as the YANG Set from the dropdown menu, but this time we search for and select the following OpenConfig YANG modules:
You can find out how the various modules are linked together by right-clicking a module in the YANG tree and choosing Properties. There is a lot of useful information about the module and which other submodules are linked to it. The same option is available from the left menu at Explore -> YANG, where you can load the individual modules and explore the details.
Screenshot 7: OpenConfig YANG Module properties.
Load the modules and make sure you selected the NETCONF operation <edit-config>
. You do not need to select a device, as we will not run a NETCONF RPC from YANG Suite for now. Expand the openconfig-interfaces module as well as the interfaces container and the interface list element. Add the name GigabitEthernet2, which is the key of the list element as it was for the native model. Expand the config container, add the interface name again, choose ianaift:ethernetCsmacd for the type leaf, and select true for the enabled leaf element.
Screenshot 8: OpenConfig YANG Interfaces.
The IP address configuration is a little bit hidden in the OpenConfig YANG models. Scroll down to the subinterfaces container, expand it, then expand the subinterface list element. The key index needs to be 0 when you do not have a subinterface, which is somewhat confusing, but the IP addresses need to be set there. Move on and expand oc-ip:ipv4 -> oc-ip:addresses -> oc-ip:address and add 10.0.10.2 as oc-ip:ip. Expand the oc-ip:config container, add the IP address at oc-ip:ip again, and add 24 as oc-ip:prefix-length. Make sure that you have cleared the XML payload section before you hit Build RPC(s) to take a look at what we have so far.
Screenshot 9: OpenConfig YANG Interfaces IP settings.
Let us move to the openconfig-network-instance model to create the configuration for BGP on Router2. Expand the network-instances container and the network-instance list element. The network instances are the layer-2, layer-3, or layer-2+layer-3 forwarding instances on a device. In our case we are using the default network instance as the name, which is the key of the list. As the default network instance is already present on the device, we do not need to use the config container in the network-instance and can move on to the protocols container.
Screenshot 10: OpenConfig YANG Network Instances.
Expand the protocols container and the protocol list element. Now it gets a little more complicated. The identifier of the protocol needs to be added as an oc-pol-types:INSTALL_PROTOCOL_TYPE, which in our case is oc-pol-types:BGP. Click into the value field first and you will see that there is a reference; down at the config container, expand it, and you get a dropdown where you can see the valid values to choose from.
Screenshot 11: OpenConfig YANG Protocols in Network Instances.
As mentioned before, we use oc-pol-types:BGP as the identifier in the protocol list and in the config container. We do the same with 65002 for the name, which is a unique name for the protocol instance. You can use whatever string you want, but for better correlation I decided to use the BGP AS number.
Screenshot 12: OpenConfig YANG BGP Protocol in Network Instances.
Scroll further down to the bgp container and expand it. Expand the global container as well and add the AS number 65002 and the router-id 2.2.2.2. Then move on to the neighbors container, expand it, and under the neighbor list element add the neighbor-address of Router1, which is 10.0.10.1. Add the same value at the neighbor-address under the config container and use 65001 as peer-as, which is the equivalent of the remote-as on the CLI neighbor statement.
Screenshot 13: OpenConfig YANG BGP Config.
Remember to clear the XML payload section using the Clear RPC(s) button and then hit Build RPC(s). As we did for Router1, we can replicate the XML payload for the other interfaces and BGP neighbors to complete the full XML payload for Router2. It should then look like example 3 below. Now only the NX-OS device Nexus1 is left.
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<interfaces xmlns="http://openconfig.net/yang/interfaces">
<interface>
<name>GigabitEthernet2</name>
<config>
<name>GigabitEthernet2</name>
<type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
<enabled>true</enabled>
</config>
<subinterfaces>
<subinterface>
<index>0</index>
<ipv4 xmlns="http://openconfig.net/yang/interfaces/ip">
<addresses>
<address>
<ip>10.0.10.2</ip>
<config>
<ip>10.0.10.2</ip>
<prefix-length>24</prefix-length>
</config>
</address>
</addresses>
</ipv4>
</subinterface>
</subinterfaces>
</interface>
<interface>
<name>GigabitEthernet3</name>
<config>
<name>GigabitEthernet3</name>
<type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
<enabled>true</enabled>
</config>
<subinterfaces>
<subinterface>
<index>0</index>
<ipv4 xmlns="http://openconfig.net/yang/interfaces/ip">
<addresses>
<address>
<ip>10.0.20.2</ip>
<config>
<ip>10.0.20.2</ip>
<prefix-length>24</prefix-length>
</config>
</address>
</addresses>
</ipv4>
</subinterface>
</subinterfaces>
</interface>
<interface>
<name>GigabitEthernet4</name>
<config>
<name>GigabitEthernet4</name>
<type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
<enabled>true</enabled>
</config>
<subinterfaces>
<subinterface>
<index>0</index>
<ipv4 xmlns="http://openconfig.net/yang/interfaces/ip">
<addresses>
<address>
<ip>10.0.40.2</ip>
<config>
<ip>10.0.40.2</ip>
<prefix-length>24</prefix-length>
</config>
</address>
</addresses>
</ipv4>
</subinterface>
</subinterfaces>
</interface>
</interfaces>
<network-instances xmlns="http://openconfig.net/yang/network-instance">
<network-instance>
<name>default</name>
<protocols>
<protocol>
<identifier xmlns:oc-pol-types="http://openconfig.net/yang/policy-types">oc-pol-types:BGP</identifier>
<name>65002</name>
<config>
<identifier xmlns:oc-pol-types="http://openconfig.net/yang/policy-types">oc-pol-types:BGP</identifier>
<name>65002</name>
</config>
<bgp>
<global>
<config>
<as>65002</as>
<router-id>2.2.2.2</router-id>
</config>
</global>
<neighbors>
<neighbor>
<neighbor-address>10.0.10.1</neighbor-address>
<config>
<neighbor-address>10.0.10.1</neighbor-address>
<peer-as>65001</peer-as>
</config>
</neighbor>
<neighbor>
<neighbor-address>10.0.20.1</neighbor-address>
<config>
<neighbor-address>10.0.20.1</neighbor-address>
<peer-as>65001</peer-as>
</config>
</neighbor>
<neighbor>
<neighbor-address>10.0.40.3</neighbor-address>
<config>
<neighbor-address>10.0.40.3</neighbor-address>
<peer-as>65003</peer-as>
</config>
</neighbor>
</neighbors>
</bgp>
</protocol>
</protocols>
</network-instance>
</network-instances>
</config>
Example 3: Complete XML config payload for Router2 from OpenConfig YANG models.
For the NX-OS device I had several challenges during testing. Initially I wanted to use only the OpenConfig YANG modules to create the XML payload for the configuration. It quickly turned out that not all CLI functions are implemented in the OpenConfig YANG model for NX-OS; for example, it is not possible to change an interface from a layer-2 to a layer-3 interface. On the CLI you would simply enter no switchport at the interface configuration and that’s it. So I decided to use a combination of both YANG modules, the Native Cisco-NX-OS-device module and the OpenConfig modules. Furthermore I thought it would be a good idea to keep the configuration parts separated into single files and send them later via NETCONF sequentially to the NX-OS device.
Before being able to configure an IP address on an NX-OS interface we need to make it a layer-3 interface as described above. For this configuration we have to use the Native Cisco-NX-OS-device module. Load the modules as we did before and search the YANG module for the intf-items container with the phys-items sub-container. Expand the PhysIf-list list item and enter eth1/1 as key id. Scroll further down to the layer leaf and choose Layer3 from the dropdown.
Screenshot 12: Native Cisco-NX-OS-device physical interface
Build the RPC as we did before, replicate the configuration for eth1/2, and save the XML payload as nxos_native_interfaces.xml. This is the first configuration part we will send to the NX-OS device. The XML payload should look like example 4 below.
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<System xmlns="http://cisco.com/ns/yang/cisco-nx-os-device">
<intf-items>
<phys-items>
<PhysIf-list>
<id>eth1/1</id>
<layer>Layer3</layer>
</PhysIf-list>
<PhysIf-list>
<id>eth1/2</id>
<layer>Layer3</layer>
</PhysIf-list>
</phys-items>
</intf-items>
</System>
</config>
Example 4: Physical interface XML config payload for Nexus1 from Native Cisco-NX-OS-device.
Then the next step is to create the configuration part for the IP addressing on the interfaces. We already used the OpenConfig modules for the interface IP addressing on Router2, so we can use it as an example and do not need to browse through the YANG model again to put the configuration together. Copy the XML payload for the interfaces from Router2 and replace the interface names and IP addresses. Save the XML payload as nxos_openconfig_interfaces.xml and it should look like example 5 below.
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<interfaces xmlns="http://openconfig.net/yang/interfaces">
<interface>
<name>eth1/1</name>
<config>
<name>eth1/1</name>
<type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
<enabled>true</enabled>
</config>
<subinterfaces>
<subinterface>
<index>0</index>
<ipv4 xmlns="http://openconfig.net/yang/interfaces/ip">
<addresses>
<address>
<ip>10.0.30.3</ip>
<config>
<ip>10.0.30.3</ip>
<prefix-length>24</prefix-length>
</config>
</address>
</addresses>
</ipv4>
</subinterface>
</subinterfaces>
</interface>
<interface>
<name>eth1/2</name>
<config>
<name>eth1/2</name>
<type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
<enabled>true</enabled>
</config>
<subinterfaces>
<subinterface>
<index>0</index>
<ipv4 xmlns="http://openconfig.net/yang/interfaces/ip">
<addresses>
<address>
<ip>10.0.40.3</ip>
<config>
<ip>10.0.40.3</ip>
<prefix-length>24</prefix-length>
</config>
</address>
</addresses>
</ipv4>
</subinterface>
</subinterfaces>
</interface>
</interfaces>
</config>
Example 5: IP addressing XML config payload for Nexus1 from OpenConfig modules.
Last but not least we create the XML payload for the BGP configuration including the neighbor statements using the OpenConfig modules. As before we could copy the example from Router2 and use it here, but there are some differences in the NX-OS implementation of the openconfig-network-instance module compared to IOS-XE. The first is the name leaf of the protocol list element under the protocols container: in the NX-OS implementation it has to match the protocol name you want to use, in our case bgp. The second is the address-family-specific configuration per neighbor in the afi-safis container under the neighbor list element, which needs to be added on NX-OS. We have to add oc-bgp-types:IPV4_UNICAST as the afi-safi-name key and inside the config container for each neighbor statement. Example 6 shows you the complete BGP XML payload.
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<network-instances xmlns="http://openconfig.net/yang/network-instance">
<network-instance>
<name>default</name>
<protocols>
<protocol>
<name>bgp</name>
<identifier xmlns:oc-pol-types="http://openconfig.net/yang/policy-types">oc-pol-types:BGP</identifier>
<config>
<identifier xmlns:oc-pol-types="http://openconfig.net/yang/policy-types">oc-pol-types:BGP</identifier>
<name>bgp</name>
</config>
<bgp>
<global>
<config>
<as>65003</as>
<router-id>3.3.3.3</router-id>
</config>
</global>
<neighbors>
<neighbor>
<neighbor-address>10.0.30.1</neighbor-address>
<config>
<neighbor-address>10.0.30.1</neighbor-address>
<peer-as>65001</peer-as>
</config>
<afi-safis>
<afi-safi>
<afi-safi-name xmlns:oc-bgp-types="http://openconfig.net/yang/bgp-types">oc-bgp-types:IPV4_UNICAST</afi-safi-name>
<config>
<afi-safi-name xmlns:oc-bgp-types="http://openconfig.net/yang/bgp-types">oc-bgp-types:IPV4_UNICAST</afi-safi-name>
</config>
</afi-safi>
</afi-safis>
</neighbor>
<neighbor>
<neighbor-address>10.0.40.2</neighbor-address>
<config>
<neighbor-address>10.0.40.2</neighbor-address>
<peer-as>65002</peer-as>
</config>
<afi-safis>
<afi-safi>
<afi-safi-name xmlns:oc-bgp-types="http://openconfig.net/yang/bgp-types">oc-bgp-types:IPV4_UNICAST</afi-safi-name>
<config>
<afi-safi-name xmlns:oc-bgp-types="http://openconfig.net/yang/bgp-types">oc-bgp-types:IPV4_UNICAST</afi-safi-name>
</config>
</afi-safi>
</afi-safis>
</neighbor>
</neighbors>
</bgp>
</protocol>
</protocols>
</network-instance>
</network-instances>
</config>
Example 6: BGP XML config payload for Nexus1 from OpenConfig modules.
Now we are ready to push the XML payload configurations using the Python library ncclient. I have prepared a script for this which loops through the list of devices, opens the XML payload for the configuration, and sends it to the device using NETCONF with the <edit-config>
operation. There is a list of configurations for the NX-OS device Nexus1 which is used in an additional loop that sends all three XML payloads sequentially to the device. It should be pretty straightforward as you can see in example 7 below.
'''Python script to push XML configuration payload to devices via NETCONF'''
import logging
from ncclient import manager
logging.basicConfig(level=logging.DEBUG)
# List of device IP addresses
devices = ["192.168.255.51", "192.168.255.52", "192.168.255.53"]
# List of NX-OS device configurations
nxos_configs = ["native_interfaces", "openconfig_interfaces", "openconfig_bgp"]
# Loop through the devices and connect to each one
for device in devices:
    router = manager.connect(
        host=device,
        username="expert",
        password="1234QWer!",
        hostkey_verify=False
    )
    # For the NX-OS device
    if device == "192.168.255.53":
        # Loop through the NX-OS configs and apply them sequentially
        for config in nxos_configs:
            with open(f"nxos_{config}.xml", encoding="utf-8") as file:
                payload = file.read()
            router.edit_config(payload, target="running")
    # For all other devices
    else:
        with open(f"{device}.xml", encoding="utf-8") as file:
            payload = file.read()
        router.edit_config(payload, target="running")
    # Close the NETCONF session cleanly
    router.close_session()
Example 7: Python script to send the configuration to the devices.
As you might have noticed, I imported the logging module and set the logging level to DEBUG with logging.basicConfig(level=logging.DEBUG)
. It can be commented out in the code, but for troubleshooting it is very helpful. Below in example 8 there is an error message from the debug output which shows you exactly what was wrong with your XML payload.
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:a3bdb31a-db79-46f1-9d6e-c915b94777a8">
<rpc-error>
<error-type>protocol</error-type>
<error-tag>invalid-value</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en">List Merge Failed: [ERR] Invalid DN sys/isis/inst, wrong rn prefix inst at position 13</error-message>
<error-path>/network-instances/network-instance/protocols/protocol/name</error-path>
</rpc-error>
</rpc-reply>
Example 8: NETCONF RPC reply error message.
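If the full DEBUG output becomes too noisy, the logging setup can also be scoped instead of global. Here is a minimal sketch; the logger name follows ncclient's module path, which is an assumption about the installed version, so check the DEBUG output for the actual logger names:

```python
import logging

# Keep everything quiet by default...
logging.basicConfig(level=logging.WARNING)
# ...but raise only ncclient's RPC layer to DEBUG so the
# <rpc> / <rpc-reply> messages stay visible for troubleshooting
logging.getLogger("ncclient.operations.rpc").setLevel(logging.DEBUG)
```

This way the SSH transport chatter is suppressed while the RPC payloads and replies remain in the output.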
The error messages are not always that helpful, and sometimes you have to dig into the YANG models to find out what went wrong. I can tell you that I went through a lot of trial & error testing phases. In our case everything went well and we got an <ok/>
message back from the devices which looks like the one in example 9.
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:42aeb554-42a7-4dc6-abb3-47776dc9e635">
<ok/>
</rpc-reply>
Example 9: NETCONF RPC reply ok message.
Now there is only one thing left, which is the validation of our changes. We know that the NETCONF operations via the Python script using ncclient went well, but is the configuration working on the devices? We could log in to the devices and quickly check manually, but we could also use YANG Suite to validate our changes. Let me show you how it works for IOS-XE as an example. Load the Native Cisco-IOS-XE-bgp-oper model and choose the <get>
operation with Router1 as the device. Expand the bgp-state-data container and the neighbors container. Mark the checkbox at the neighbor list and click into the value field of neighbor-id without entering anything. Then build the XML payload using the Build RPC(s) button and run it with Run RPC(s).
Screenshot 13: BGP neighbor validation using Cisco-IOS-XE-bgp-oper model
A new browser tab will open where you can follow the complete RPC with all details, similar to the debug output from the Python script. You can see the successful RPC reply message showing the neighbors of Router1.
Screenshot 14: BGP neighbor validation reply message.
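For reference, the RPC that YANG Suite builds here is essentially a <get> with a subtree filter along these lines. This is a sketch: the namespace follows the usual IOS-XE convention for the native Cisco-IOS-XE-bgp-oper model, so verify it against your loaded model version:

```xml
<filter>
  <bgp-state-data xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp-oper">
    <neighbors>
      <neighbor>
        <neighbor-id/>
      </neighbor>
    </neighbors>
  </bgp-state-data>
</filter>
```

The empty neighbor-id leaf selects all entries of the neighbor list, which matches clicking into the value field without entering anything.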
For the OpenConfig YANG models there are no separate operational models like for the Native models; you use the same models as for the configuration parts, but read the state data. Load the OpenConfig openconfig-network-instance model, but keep the <get>
operation and Router1 as the device. Add the value default to the name leaf of the network-instance container and then move further down to the protocol list element under the protocols container. Expand the bgp and neighbors containers, mark the neighbor list element, and click into the neighbor-address field without adding anything. Expand the state container and click into its neighbor-address value field, also without adding anything.
Screenshot 15: BGP neighbor validation using OpenConfig model
Clear the XML payload section using Clear RPC(s), create a new RPC using the Build RPC(s) button, and it should look like screenshot 15 above. Run the RPC with Run RPC(s) and you should get the same RPC reply message from Router1 as shown in screenshot 16. The results are the same, but this time with the OpenConfig YANG model.
Screenshot 16: BGP neighbor validation reply message with OpenConfig.
The next step could be to build another Python script using NETCONF with the <get>
operation to validate the changes in a programmatic way, but that is something for another blog post.
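As a small teaser, such a validation script could start from this minimal sketch: it sends a <get> with a subtree filter for the OpenConfig BGP neighbor state and maps each neighbor address to its session-state. The device address and credentials match the lab above, but the helper function and the exact filter shape are my own and would need testing against the device:

```python
'''Sketch: validate BGP neighbor session states via NETCONF <get>'''
import xml.etree.ElementTree as ET

try:
    from ncclient import manager  # requires: pip install ncclient
except ImportError:
    manager = None

OC_NI = "http://openconfig.net/yang/network-instance"

# Subtree filter asking only for the BGP neighbors of the default VRF
FILTER = f'''
<network-instances xmlns="{OC_NI}">
  <network-instance>
    <name>default</name>
    <protocols><protocol><bgp><neighbors/></bgp></protocol></protocols>
  </network-instance>
</network-instances>
'''


def neighbor_states(reply_xml):
    """Map neighbor-address -> session-state from a <get> reply."""
    ns = {"oc": OC_NI}
    states = {}
    for nbr in ET.fromstring(reply_xml).iter(f"{{{OC_NI}}}neighbor"):
        addr = nbr.find("oc:neighbor-address", ns)
        state = nbr.find("oc:state/oc:session-state", ns)
        if addr is not None and state is not None:
            states[addr.text] = state.text
    return states


def print_bgp_states(host, username, password):
    """Connect to a device and print each BGP neighbor's session state."""
    with manager.connect(host=host, username=username, password=password,
                         hostkey_verify=False) as conn:
        reply = conn.get(filter=("subtree", FILTER))
        for addr, state in neighbor_states(reply.data_xml).items():
            print(f"{addr}: {state}")  # we want to see Established here


# Example usage against the lab:
# print_bgp_states("192.168.255.51", "expert", "1234QWer!")
```

The parsing helper only relies on the openconfig-network-instance namespace and the session-state leaf from the OpenConfig BGP neighbor state, so the same function should work for both the IOS-XE and the NX-OS replies.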
I hope it was easy to follow and to replicate in your own setup. You can find all files from the examples used in this post in my GitHub repository netconf-example. If you run into any issues with the setup or if you find any errors, please let me know and/or leave a comment using the GitHub issues.
Thank you for reading this blog post and following along until the end. Please leave feedback in the comments!
Cisco Live is all about the people, and the people make this event awesome. You will meet friends, coworkers, or ex-coworkers whom you haven’t seen for some time. You will meet virtual friends from several social media platforms, again or for the first time, which is a great experience: getting to know each other face to face rather than online. You will make new friends, get to know new business partners, and much more. Don’t miss the chance to book a Meet the Engineer (MTE) to connect and exchange with outstanding tech people from Cisco. The opportunity to meet and connect with people face to face is an underestimated value in life which can’t be replaced by online meetings. Attend Cisco Live because of the people!
Breakouts, Broadcasts, Customer Success Stories, Demos, DevNet, Instructor-led Labs, IT Leadership, Innovation Talks, Keynotes, Partner Case Studies, Techtorials, Workshops: whatever session you book, in most cases you will not be disappointed by the content delivered to you. For the sessions I attended, I could see a slight increase in the quality of the content and how the presentations were held. It seemed to me that a lot of attendee feedback was brought into the sessions, which is why it is important to provide honest feedback at the end of each session. The same applied to the Instructor-led Lab sessions. I would like to highlight and recommend five sessions I attended. Each session is of a different type to give you an example of what kinds of sessions are available:
BRKATO-2106 - Ansible Network Automation, GitOps for NetOps
A presentation-style session type with a duration between 45 and 90 minutes. In this case it was a 60-minute deep dive into Ansible used within GitOps for NetOps. The session was a split presentation between Sean Cavanaugh from Red Hat and Adrian Iliesiu from Cisco DevNet. They showed a simple network-as-code example where a Git repository was the Source of Truth (SoT). Pull requests started the automation workflow for applying changes. Very cool and a highly recommended session for all NetDevOps geeks.
DEVNET-3008 - Extending CML: Terraforming the Lost City
A 20- or 45-minute DevNet presentation session. Quinn Snyder took us into the Lost City and showed us how to use Terraform and Atlantis to track changes in pull requests for review and approval. It was a very cool session which shows once more how important it is to use the right tools to build processes into the automation path of your network.
DEVWKS-2033 - Hands-on HVAC Python SDK!
A 20- or 45-minute DevNet workshop session with hands-on work following a step-by-step lab guide while Kareem Iskander guided us through it. HashiCorp Vault is a really great tool to manage your app secrets or credentials, and its API is easy to consume. There is a similar learning lab called Securing your API Token with Vault and a GitHub repository for Securing your API authentication keys with Vault available.
CISCOU-3000 - The Future of Network Operations in the Age of Artificial Intelligence
A 20- to 45-minute session presentation at the Cisco U. theater held at the Cisco Learning & Certification booth. The magician John Capobianco brought his assistant ChatGPT and showed us some magic tricks of how AI could potentially support network monitoring and troubleshooting. You can watch all session recordings on the Cisco U. YouTube channel; I would subscribe to the channel if you do not want to miss upcoming sessions.
LTRCRT-3100 - Building Python Applications for DevNet Professional and Expert Candidates
A four-hour instructor-led lab session including a short introduction followed by hands-on experience until the end. It is an extra-paid session which needs to be booked in advance, but I promise it will bring you really good value. Hank Preston and Akhila Pamukuntla provided an awesome lab with three challenges where we had to build Python applications. I can highly recommend this lab for all DevNet Professional and Expert candidates.
Please click on the following link to find out more about the different session types at Cisco Live. Most of the Cisco Live sessions are available on demand on the Cisco Live website. I added the links for the sessions mentioned above as far as they were available at the time of writing this post.
The Cisco Learning & Certification booth is the best place to meet and connect with your Cisco Learning Network community peers. Share and exchange your knowledge about your certification journeys. Hang out at the Cisco U. theater and attend really great sessions with the chance to win one of the many raffles. Get in touch with the incredible Learning & Certification team, ask your questions, and get the latest updates. Last but not least, don’t miss taking a look at the latest version of Cisco Modeling Labs. It is always worth visiting the booth and spending some time there.
Apart from the session presentations and workshops, Cisco’s developer community is present in the DevNet Zone. If you want to learn, code, inspire, and connect, this is the right place where you can innovate with Cisco technologies and platforms. Check out Five Reasons You Need To Visit the DevNet Zone Every Day of Cisco Live and add them to my six reasons for next year. As I am preparing for the DevNet Expert lab exam right now, I spent a lot of time here and attended many DevNet sessions and workshops.
The World of Solutions is the place for Cisco, partners, learning, and networking, all in one place. You can explore 500,000 square feet of interaction and engagement with Cisco products and technologies and other leading technology companies. There are presentations, live hands-on demos, raffles, quizzes, and much more to discover. One highlight was the Cisco football stadium booth and the opportunity to take a picture with the NFL Super Bowl trophy, the Vince Lombardi Trophy.
The party is the final highlight after an intense week full of learning. Enjoy good music and performances from extraordinary artists while eating and drinking. Calm down and reflect on the past days before the final day, when Cisco Live closes its doors until next time. This year we were lucky to visit the home of the Las Vegas Raiders football team, Allegiant Stadium. We also had Gwen Stefani and Blake Shelton performing live on stage. What a great night!
No matter which Cisco Live you can attend, it is always worth it. There is no question about it. The people you meet and the content delivered to you are overwhelming and priceless. I hope the above-mentioned reasons will help you decide whether to attend Cisco Live or even help convince your manager. You might leave out the party when you pitch it to your boss, but other than that there are certainly good reasons to attend. Hope to see you in Amsterdam or Las Vegas next year, and feel free to use the comment section to leave a comment or even add your personal reasons to attend.
There are three different subscription plans, Free, Essentials, and All Access, and they are obviously slightly different. All Access is pretty clear: it gives you access to the full feature set and all digital content available. A price of $6,000 ($4,800 with the current discount) or 48 Cisco Learning Credits (CLCs) seems a steep price. The Essentials plan takes only one feature away, which is All self-paced hands-on labs, but keeps the Essential self-paced hands-on labs feature, which could be good enough for many students.
Depending on your knowledge level and your learning and certification goals, the major drawback will be the missing Professional-level certification Learning Paths content within the Essentials subscription. More or less important in this context could also be the removed Cisco product Learning Paths and Cisco solution Learning Paths. In my opinion the Essentials subscription looks good, and a regular price of $1,800 ($1,500 with the current discount) per year seems fair considering the amount of content you get. You could also use 15 Cisco Learning Credits (CLCs) to purchase it.
The Free plan consists of the same features as the Essentials except the Essential self-paced hands-on labs, and the digital content is limited to Podcasts, Webinars, Tutorials, and Videos. The missing opportunity to do self-paced labs could be an exclusion criterion, but one key point about the Free plan is the opportunity to earn Continuing Education (CE) credits from the content provided. Today it is really important to stay relevant in the industry and maintain your Cisco certifications.
I saw some critical discussions on social media about the prices of the subscription plans, which I can understand to a certain point. The All Access plan seems too expensive for individuals, in particular for young people or juniors who are at the beginning of their career, but the price is doable for companies supporting their employees. As I already wrote before, the Essentials plan has a fair price, and I think investing $125-150 a month in yourself should be possible. It is an investment in yourself and your career. Many people often forget how much work goes into providing high-quality learning and training content. Preparing content with examples, building labs, and recording videos is time consuming. Everyone who has ever prepared a training session knows how much time and work you need to put in to create good content which brings value to the attendees. The Cisco Learning & Certification team did a really good job with Cisco U., providing high-quality courses, learning paths, and self-paced hands-on labs.
However, there is always room for improvement; especially the user interface could be improved while you are in a course. Forward and backward navigation through the sections and subsections of a course and returning to the course overview does not seem quite intuitive to me. After spending some time with a course it gets better because I got used to the navigation. The courses themselves were a mixture of pre- and post-assessment questions, videos, reading, and hands-on labs, which is a pretty good combination for learning in general. The pre-assessment tests at each section check your knowledge of the contents. Based on your results, Cisco U. recommends which sections you could skip because you have proficient knowledge and which sections you should work on. At the end of each subsection there are also some content review questions. The self-paced hands-on labs provide a virtual lab environment similar to the Cisco Learning Network Store courses, including an introduction video and detailed step-by-step guidance through the lab. After each main section there is a post-assessment to verify your knowledge again, providing a score report of the subsections. I am not 100% sure if the pre- and post-assessments contain the same questions, but it seems so. I would like to have the option to mark questions during the assessments for later review. The detailed score reports give a good understanding of what you need to study more and in which domain, but there is no option to review the single questions.
Cisco U. provides high-quality learning content and brings a lot of value to the students in my opinion. The courses span technology categories like Software Networking, Security, Cloud and Computing, Data Center, Mobility and Wireless, and others. Besides courses for Cisco certifications and products there are also Microsoft, AWS, and vendor-independent trainings. The close linking to the Cisco Learning Network community is another great benefit to stay in contact with study peers. The opportunity to ask for help, share your knowledge and experience from courses, labs, or learning paths, and help others when they run into problems shows again that learning is easier with a collaborative community. I hope you like my quick review of Cisco U., and please leave a comment if you do. Thank you for reading!
During the first part we focused on section 4.1 from the exam blueprint, which is about creating a Docker image using a Dockerfile. We ignored Docker networking to keep it simple, but maybe it was not a good idea to put all Docker containers into the same network, especially the Docker default bridge network. We will now take a closer look at Docker networking according to section 4.4, which is called “Create, consume, and troubleshoot a Docker host and bridge-based networks and integrate them with external networks”. Sounds interesting, right? I will show you how to expand our application framework example using the benefits of Docker networking.
I am still using a simple lab setup in Cisco Modeling Labs (CML) with a Ubuntu 20.04 machine as devbox running Docker and external connectivity. My lab topology file is available here for download and import into CML. You could also use the official Candidate Workstation available for download on the Cisco Learning Network.
Hope you are as excited about this second part of the Containers series as I am and that you will follow my journey towards the Cisco Certified DevNet Expert. Let’s start!
We will slightly change our design and create a frontend and a backend network in Docker to separate the containers. It makes sense to keep different functional parts isolated from each other and also to control the external connectivity of the containers. In our case the frontend network needs external connectivity, as the default network had before, but the backend network does not necessarily need it. The load balancer will be connected to both networks to serve the incoming requests and to have connectivity to the application servers on the backend. The application servers will only be connected to the backend network.
As you can see from the diagram we will also take advantage of assigning IP addresses to our containers from the new networks to get a fixed IP setup. This will make the whole setup a little bit easier to manage because we do not need to start the containers in any specific order to make sure they get a specific IP address as we did before in part one.
To begin with we take a look at the Docker networks we have out of the box using the docker network ls
command.
developer@devbox:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6ba106d77aaa bridge bridge local
0d30fa1bbb5e host host local
91482370b08a none null local
As you can see from the output, there are only the default Docker networks on my machine. The Docker network mode host means that a container is not isolated from the Docker host’s network stack and does not get its own IP address allocated. The none network has all networking disabled. When you create a network without specifying any options, Docker creates a bridge network with a non-overlapping subnet by default. Bridge networks are usually used for applications running in standalone containers that need to communicate.
Let’s take a closer look at the default bridge network bridge using the docker network inspect bridge
command. You can either use the network name or the network ID to inspect the network.
developer@devbox:~$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "6ba106d77aaa988cf9f1f7a776d859057e87c97de23e0d5b8c35009982a80dd1",
"Created": "2022-12-10T11:58:16.95713737Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
As you can see from the output, the Docker default bridge network has a local subnet of 172.17.0.0/16 with gateway 172.17.0.1. Let’s focus on two other settings that are important for us. To provide external connectivity for the Docker containers attached to the bridge network, the option com.docker.network.bridge.enable_ip_masquerade needs to be set to true. This option enables IP masquerading, which is another term for NAT (Network Address Translation). Then we have the option com.docker.network.bridge.enable_icc, which enables inter-container connectivity. That means containers attached to the same network are able to communicate with each other. We will see the differences in practice in a minute.
At this point I want to highlight some (but not all) differences between the Docker default bridge and user-defined bridges:
User-defined bridges provide better isolation
This is one of the key differences and why we create user-defined networks in our example: we want to isolate the frontend and the backend. By default all containers are attached to the default bridge unless you specify another network using the --network
option.
Containers can be attached and detached from user-defined networks on the fly
By default you can’t remove a container from the default bridge without stopping it, while you can connect and disconnect it from user-defined networks on the fly. This adds more flexibility to container management.
User-defined bridges provide automatic DNS resolution between containers
The last difference I want to highlight is the automatic DNS resolution between containers which allows us to use the names of the containers instead of IP addresses to communicate with them.
For more information about Docker Networking and especially the Differences between user-defined bridges and the default bridge please look at the documentation links. Let’s move on and create our own Docker bridge networks.
First we create the backend network named backend-net without external connectivity. We use the docker network create -d bridge
command to create our new bridge network. We specify a subnet 172.21.0.0/16 and a gateway 172.21.0.1 using the corresponding --subnet
and --gateway
command options. In our example we do not want our backend application containers to have connectivity to any external destination, so we disable the IP masquerade option we talked about earlier. As we need connectivity between the containers on the backend, in our case between the load balancer and the app containers, we enable inter-container connectivity.
docker network create -d bridge \
--subnet=172.21.0.0/16 \
--gateway=172.21.0.1 \
-o "com.docker.network.bridge.enable_ip_masquerade"="false" \
-o "com.docker.network.bridge.enable_icc"="true" \
backend-net
Similar to the backend we create the frontend network named frontend-net with external connectivity but without inter container connectivity. We use a subnet 172.20.0.0/16 and a gateway 172.20.0.1. The IP masquerade option will be enabled while we disable the inter container connectivity.
docker network create -d bridge \
--subnet=172.20.0.0/16 \
--gateway=172.20.0.1 \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.enable_icc"="false" \
frontend-net
Now we check the available networks using the docker network ls
command again and if our new bridge networks were created.
developer@devbox:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
234c7a469fb6 backend-net bridge local
6ba106d77aaa bridge bridge local
8a6a5362e288 frontend-net bridge local
0d30fa1bbb5e host host local
91482370b08a none null local
Let’s take a closer look at the settings we specified during creation using the docker network inspect command for both networks and compare the settings.
developer@devbox:~$ docker network inspect backend-net
[
{
"Name": "backend-net",
"Id": "234c7a469fb689636906866b7a30855dad4c1a239627c7613e4f3241d692ebcd",
"Created": "2022-12-24T14:30:38.963979508Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "false"
},
"Labels": {}
}
]
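Reading this JSON by eye works, but you could also verify such settings programmatically. Here is a minimal sketch using a reduced sample of the inspect output above; in practice you would capture the real docker network inspect backend-net output instead of the embedded string:

```python
import json

# Reduced sample of the `docker network inspect backend-net` output above.
# In practice, capture the real output, e.g. via subprocess or a shell pipe.
inspect_output = """
[
  {
    "Name": "backend-net",
    "IPAM": {"Config": [{"Subnet": "172.21.0.0/16", "Gateway": "172.21.0.1"}]},
    "Options": {
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "false"
    }
  }
]
"""

# `docker network inspect` returns a list with one entry per network
net = json.loads(inspect_output)[0]
options = net["Options"]

# Check the settings we asked for during `docker network create`
assert options["com.docker.network.bridge.enable_icc"] == "true"
assert options["com.docker.network.bridge.enable_ip_masquerade"] == "false"
print(f'{net["Name"]}: settings as expected')
```

A check like this is handy if you create the networks from a provisioning script and want it to fail fast on a misconfiguration.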
For the backend network the settings were applied as we specified them. Inter-container connectivity was set to true, which means it is enabled, and the IP masquerade option was set to false, which means it is disabled. What about the frontend network?
developer@devbox:~$ docker network inspect frontend-net
[
{
"Name": "frontend-net",
"Id": "8a6a5362e2886f011bf798d41adbdbeddd8a9ada05912d0b33dfba38905a1e7b",
"Created": "2022-12-24T14:58:58.35827556Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "false",
"com.docker.network.bridge.enable_ip_masquerade": "true"
},
"Labels": {}
}
]
This also looks good, the settings we wanted were applied. Let’s do some ping tests using a minimal Docker image based on Alpine Linux with a complete package index and only about 5 MB in size.
We create two containers named test1 and test2. For test1 we assign the IP address 172.21.0.10 and run it in detached mode in the background. For test2 we assign the IP address 172.21.0.11 and run it in interactive mode, which means we land on its console after it starts. Then we can start our ping tests. We use the --rm command option, which removes the containers when we stop them.
# Run test1 container
developer@devbox:~$ docker run -itd --rm \
> --network=backend-net --ip=172.21.0.10 \
> --name test1 alpine
60bf5bb2752fab6b8849e55c63bc2d0cdc19d00ff18fc095aad251395f88aa5b
# Run test2 container
developer@devbox:~$ docker run -it --rm --network=backend-net --ip=172.21.0.11 --name test2 alpine
# Ping the gateway
/ ping 172.21.0.1
PING 172.21.0.1 (172.21.0.1): 56 data bytes
64 bytes from 172.21.0.1: seq=0 ttl=64 time=0.276 ms
64 bytes from 172.21.0.1: seq=1 ttl=64 time=0.121 ms
64 bytes from 172.21.0.1: seq=2 ttl=64 time=0.096 ms
64 bytes from 172.21.0.1: seq=3 ttl=64 time=0.097 ms
^C
--- 172.21.0.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.096/0.147/0.276 ms
# Ping test1 container
/ ping test1
PING test1 (172.21.0.10): 56 data bytes
64 bytes from 172.21.0.10: seq=0 ttl=64 time=0.293 ms
64 bytes from 172.21.0.10: seq=1 ttl=64 time=0.139 ms
64 bytes from 172.21.0.10: seq=2 ttl=64 time=0.113 ms
64 bytes from 172.21.0.10: seq=3 ttl=64 time=0.115 ms
^C
--- test1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.113/0.165/0.293 ms
# Ping Google DNS server
/ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ exit
developer@devbox:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
60bf5bb2752f alpine "/bin/sh" 20 minutes ago Up 20 minutes test1
developer@devbox:~$
developer@devbox:~$ docker stop 60bf5bb2752f
As you can see from the output, the backend bridge network works as expected. The gateway is reachable but external connectivity is not. The connectivity between the containers is working, and we used the automatic DNS resolution we talked about before to verify it: the test1 container name was resolved to the IP address we assigned to it. We exited test2 and stopped test1, which was then removed. Now we do the same tests for the frontend bridge network.
# Run test1 container
developer@devbox:~$ docker run -itd --rm \
> --network=frontend-net --ip=172.20.0.10 \
> --name test1 alpine
727d3bc5e0b45a49fbcd743cb18065d4c51c31f627d31719230eaeddae4391e6
# Run test2 container
developer@devbox:~$ docker run -it --rm --network=frontend-net --ip=172.20.0.11 --name test2 alpine
# Ping the gateway
/ ping 172.20.0.1
PING 172.20.0.1 (172.20.0.1): 56 data bytes
64 bytes from 172.20.0.1: seq=0 ttl=64 time=0.284 ms
64 bytes from 172.20.0.1: seq=1 ttl=64 time=0.129 ms
64 bytes from 172.20.0.1: seq=2 ttl=64 time=0.108 ms
64 bytes from 172.20.0.1: seq=3 ttl=64 time=0.102 ms
^C
--- 172.20.0.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.102/0.155/0.284 ms
# Ping test1 container
/ ping test1
PING test1 (172.20.0.10): 56 data bytes
^C
--- test1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
# Ping Google DNS server
/ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=118 time=15.851 ms
64 bytes from 8.8.8.8: seq=1 ttl=118 time=14.852 ms
64 bytes from 8.8.8.8: seq=2 ttl=118 time=14.794 ms
64 bytes from 8.8.8.8: seq=3 ttl=118 time=14.801 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 14.794/15.074/15.851 ms
/ exit
developer@devbox:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
727d3bc5e0b4 alpine "/bin/sh" 58 seconds ago Up 56 seconds test1
developer@devbox:~$ docker stop 727d3bc5e0b4
From the output we can see that the frontend bridge network also works as expected. The gateway and the Google DNS server are reachable, but the inter-container connectivity is not working, even though the automatic DNS resolution still resolved the container name.
Now everything is prepared to bring up all containers from our example and attach them to the user-defined bridge networks we created.
Let’s first run both application containers, named myapp1 and myapp2, and connect them to the backend bridge network backend-net using the --network option together with the --ip option to assign a specific IP address from that network.
docker run -itd \
--network=backend-net --ip=172.21.0.101 \
--name myapp1 myapp
docker run -itd \
--network=backend-net --ip=172.21.0.102 \
--name myapp2 myapp
developer@devbox:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4aa6d87fe5d myapp "python3 main.py" 27 seconds ago Up 24 seconds 5000/tcp myapp2
902c09b50e38 myapp "python3 main.py" 34 seconds ago Up 32 seconds 5000/tcp myapp1
Both application containers are up and running. We can quickly use the docker inspect command to check whether the network settings were applied as expected. For better readability I removed most of the output, as the command returns a lot of data.
developer@devbox:~$ docker inspect myapp1
<output omitted>
"Networks": {
"backend-net": {
"IPAMConfig": {
"IPv4Address": "172.21.0.101"
},
"Links": null,
"Aliases": [
"902c09b50e38"
],
"NetworkID": "234c7a469fb689636906866b7a30855dad4c1a239627c7613e4f3241d692ebcd",
"EndpointID": "1ad831f2b470bb1250c94393ff6f19d26021aa4248eae921a54a17cda1a2e66e",
"Gateway": "172.21.0.1",
"IPAddress": "172.21.0.101",
<output omitted>
So far so good with the application containers. Before we start the load balancer container we need to adjust the nginx.conf file and update the image, as we use different IP addresses than in part one. You can simply edit the existing files or make a copy and edit that. I have created a new folder in the repository I use for these blog posts, which you can find here. Switch to the lb directory, open the nginx.conf file, and update the IP addresses according to the network diagram.
events {}
http {
upstream myapp {
server 172.21.0.101:5000;
server 172.21.0.102:5000;
}
server {
listen 8080;
server_name localhost;
location / {
proxy_pass http://myapp;
proxy_set_header Host $host;
}
}
}
Save the file and update the Docker image using the docker build . -t mylb command from the lb directory.
developer@devbox:~/devnet-expert-lab/blog/docker/part2-files/lb$ docker build . -t mylb
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM nginx
---> ac8efec875ce
Step 2/4 : COPY nginx.conf /etc/nginx/nginx.conf
---> 5b956db57667
Step 3/4 : EXPOSE 8080
---> Running in 97df5e000e01
Removing intermediate container 97df5e000e01
---> 567098c461bb
Step 4/4 : CMD ["nginx", "-g", "daemon off;"]
---> Running in 871b051f5bf2
Removing intermediate container 871b051f5bf2
---> 59767fb921d4
Successfully built 59767fb921d4
Successfully tagged mylb:latest
Now we are ready to run the load balancer container named mylb1, exposing TCP port 8080 as we did in part one. As the docker run command only allows one network to be specified with the --network option, we first connect only the frontend bridge network and assign the IP address 172.20.0.100.
docker run -itd -p 8080:8080 \
--network=frontend-net --ip=172.20.0.100 \
--name mylb1 mylb
Then we use the docker network connect command to attach the running load balancer container mylb1 to the backend network using the IP address 172.21.0.100.
docker network connect --ip 172.21.0.100 backend-net mylb1
Again we inspect the container to check if the network settings were applied for the load balancer as well.
developer@devbox:~$ docker inspect mylb1
<output omitted>
"Networks": {
"backend-net": {
"IPAMConfig": {
"IPv4Address": "172.21.0.100"
},
"Links": null,
"Aliases": [
"2f86fc7286cd"
],
"NetworkID": "234c7a469fb689636906866b7a30855dad4c1a239627c7613e4f3241d692ebcd",
"EndpointID": "9943ba4a79eddc99c37a5e14e6e372ad00f81bc828b3f8c3acca00ad002d4a57",
"Gateway": "172.21.0.1",
"IPAddress": "172.21.0.100",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:15:00:64",
"DriverOpts": {}
},
"frontend-net": {
"IPAMConfig": {
"IPv4Address": "172.20.0.100"
},
"Links": null,
"Aliases": [
"2f86fc7286cd"
],
"NetworkID": "8a6a5362e2886f011bf798d41adbdbeddd8a9ada05912d0b33dfba38905a1e7b",
"EndpointID": "4de0671ff7e54e7a113dfcc468eae2af32713a50793fd61376764bec79a6b51a",
"Gateway": "172.20.0.1",
"IPAddress": "172.20.0.100",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:14:00:64",
"DriverOpts": null
}
}
<output omitted>
Everything looks good so far. It is time to test the Docker networking setup we built. For that we try to access the load balancer container from the devbox itself using the curl 127.0.0.1:8080 command.
developer@devbox:~$ curl 127.0.0.1:8080
Welcome to the Docker Lab.<br>The IP address of the server is 172.21.0.101.<br>
developer@devbox:~$ curl 127.0.0.1:8080
Welcome to the Docker Lab.<br>The IP address of the server is 172.21.0.102.<br>
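The alternation we see in these two responses is nginx’s default round-robin upstream selection, which conceptually cycles through the configured servers. A toy model of that behavior (this is not nginx’s actual code, just an illustration):

```python
from itertools import cycle

# The two upstream servers from our nginx.conf
backends = ["172.21.0.101:5000", "172.21.0.102:5000"]

# Round-robin selection: hand out backends in order, wrapping around
picker = cycle(backends)

first = next(picker)   # first request goes to myapp1
second = next(picker)  # second request goes to myapp2
third = next(picker)   # third request wraps back to myapp1
print(first, second, third)
```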
It works! The first request was sent to the first application container myapp1 with IP address 172.21.0.101, and the second request was load balanced to the second application container myapp2 with IP address 172.21.0.102. Let’s test the connection with a web browser using the IP address of the devbox on port 8080. In my case it is http://192.168.11.51:8080/.
Now we refresh the web browser and get a response from the second application container, just as before with the curl command.
Great, everything worked, and hopefully for you too if you followed along. After creating our own Docker images in part one, we have now extended our simple application framework example with some of the advantages of Docker networking using user-defined bridge networks.
I hope it was again easy to follow and to replicate on your own setup. If you run into any issues with the setup or if you find any errors, please let me know and/or leave a comment using the GitHub issues.
Thank you for reading this blog post and following along until the end. Stay tuned for the next blog post about Docker, where we will further optimize the setup we built today using Docker Compose.
During this first blog series I will cover section 4.0 from the exam blueprint, which is about containers using Docker and Kubernetes. I will try to cover the four main bullet points below using a simple example, though as I wrote before not in full depth.
4.1 Create a Docker image using Dockerfile
4.2 Package and deploy a solution by using Docker Compose
4.3 Package and deploy a solution by using Kubernetes
4.4 Create, consume, and troubleshoot a Docker host and bridge-based networks and integrate them with external networks
I hope you are as excited about this blog series as I am and that you follow my journey towards the Cisco Certified DevNet Expert. Let’s start!
In the first part of this blog series I will show you how to create Docker images using a Dockerfile and then run containers from these images. In my example I use three containers running as one application with the following components: an NGINX load balancer container on the frontend which balances the requests between two similar application containers.
The idea for this scenario came originally from the Cisco On Demand E-Learning course Developing Applications using Cisco Core Platforms and APIs (DEVCOR) v1.0, available on the Cisco Learning Network Store. It used a slightly more complex scenario to demonstrate containerized applications using Docker; additionally it contained a MySQL database in the backend to store the data, which was not a container. I want to keep it simple here and focus on Docker containers. Nevertheless I can highly recommend this course, especially for the labs used to demonstrate the topics.
Before we start, let me quickly explain my setup for this demonstration. I am using a simple lab setup in Cisco Modeling Labs (CML) with an Ubuntu 20.04 machine as devbox running Docker with external connectivity. My lab topology file is available here for download and import into CML. You could also use the official Candidate Workstation available for download on the Cisco Learning Network.
No matter what setup you use, make sure that Docker is running on your machine.
developer@devbox:~$ docker version
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.16.2
Git commit: 20.10.12-0ubuntu2~20.04.1
Built: Wed Apr 6 02:14:38 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.16.2
Git commit: 20.10.12-0ubuntu2~20.04.1
Built: Thu Feb 10 15:03:35 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.9-0ubuntu1~20.04.4
GitCommit:
runc:
Version: 1.1.0-0ubuntu1~20.04.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
If Docker is not running on your machine, check the Install Docker Engine documentation for instructions for your platform. I would also recommend going through the Post-installation steps for Linux after you have installed it.
Docker is already running on my machine. Now let’s check if there are any containers running or if there are any images:
developer@devbox:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
# no container
developer@devbox:~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
# no image
developer@devbox:~$
The docker ps command lists containers; with the option -a it shows all containers, because by default it only shows running ones. The docker image command manages images; with the subcommand ls it lists all locally available images. Nothing there so far, we have a green field. Now we quickly run the official Docker image called hello-world to test the Docker setup. I will explain official Docker images in a minute.
developer@devbox:~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:18a657d0cc1c7d0678a3fbea8b7eb4918bba25968d3e1b0adebfa71caddbc346
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
As you can see from the output, the command started a container using the image hello-world. Docker could not find the image locally, so it was pulled first and then the steps described in the output were performed. If we now list the containers and images again, we will see more than before:
developer@devbox:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
824970e1855e hello-world "/hello" 22 minutes ago Exited (0) 3 minutes ago objective_neumann
developer@devbox:~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest feb5d9fea6a5 13 months ago 13.3kB
developer@devbox:~$
The container was created but stopped after it streamed the output to my terminal, as you can see from the status. The hello-world image is now available locally, so if you run it again there is no need to download it. Now I will show you how to create our own images from a Dockerfile and run containers for the scenario described before.
First let’s create two folders app and lb to separate the Dockerfiles and container specific files.
developer@devbox:~$ mkdir app lb
developer@devbox:~$ tree
.
├── app
└── lb
2 directories, 0 files
Then we create a Dockerfile with the filename Dockerfile for the application image in the app directory. The Dockerfile will later be recognized when we run the docker build . command from the app folder to build the image.
FROM python:3.9
COPY . /app
WORKDIR /app
RUN python3 -m venv venv
RUN venv/bin/python3 -m pip install --upgrade pip
RUN venv/bin/pip install flask
EXPOSE 5000/tcp
CMD ["venv/bin/python3", "main.py"]
We use the official Docker image python from Docker Hub in version 3.9 and specify it with the FROM statement. Official Docker images are designed for the most common use cases, have clear documentation, and follow Docker best practices. The COPY statement copies local files into a directory in the container. Then we set the working directory for the app with WORKDIR. After that the container creates a virtual environment, upgrades pip, and installs the Python library flask, a lightweight web application framework that is used in the main.py file; we will look at it in a minute. The app is started with the CMD statement. Before that, the EXPOSE statement is used to let the container listen on the specified port, in our case TCP port 5000. Each statement in the Dockerfile results in a build step, and when we build the Docker image we will see these steps.
Let’s quickly create the main.py file for the application itself in which we use the two Python libraries flask and socket.
from flask import Flask
import socket
ip = socket.gethostbyname(socket.gethostname())
app = Flask(__name__)
@app.route('/')
def home():
out = (
f'Welcome to the Docker Lab.<br>'
f'The IP address of the server is {ip}.<br>'
)
return out
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0')
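As a quick aside, if you want to sanity-check this route before building the image, Flask ships a test client that can call the route without starting a server. A minimal sketch, assuming flask is installed in a local virtual environment (the try/except fallback for the hostname lookup is my addition for machines whose hostname does not resolve):

```python
from flask import Flask
import socket

# Fall back to localhost if the hostname does not resolve on this machine
try:
    ip = socket.gethostbyname(socket.gethostname())
except OSError:
    ip = '127.0.0.1'

app = Flask(__name__)

@app.route('/')
def home():
    return (
        f'Welcome to the Docker Lab.<br>'
        f'The IP address of the server is {ip}.<br>'
    )

# Exercise the route via the test client instead of app.run()
with app.test_client() as client:
    response = client.get('/')
    print(response.status_code, response.data.decode())
```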
With socket, a low-level networking interface, we grab the IP address of the application server, and flask provides the web application to display content including that IP address, so we can verify the load-balancing functionality. Now we run the docker build . -t myapp:1.0 command, where the -t myapp:1.0 option stands for tag and specifies the image name and optionally a tag in the name:tag format.
developer@devbox:~/app$ docker build . -t myapp:1.0
Sending build context to Docker daemon 3.072kB
Step 1/8 : FROM python:3.9
---> 7d357ce6a803
Step 2/8 : COPY . /app
---> bd1bef94fffe
Step 3/8 : WORKDIR /app
---> Running in d6807295943e
Removing intermediate container d6807295943e
---> b9f26b79d4d8
Step 4/8 : RUN python3 -m venv venv
---> Running in 6e27cd4ea7a4
Removing intermediate container 6e27cd4ea7a4
---> 949b9f4e27d5
Step 5/8 : RUN venv/bin/python3 -m pip install --upgrade pip
---> Running in 8fb9723c1966
Requirement already satisfied: pip in ./venv/lib/python3.9/site-packages (22.0.4)
Collecting pip
Downloading pip-22.3.1-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 6.2 MB/s eta 0:00:00
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 22.0.4
Uninstalling pip-22.0.4:
Successfully uninstalled pip-22.0.4
Successfully installed pip-22.3.1
Removing intermediate container 8fb9723c1966
---> 4c300b4fba45
Step 6/8 : RUN venv/bin/pip install flask
---> Running in 40bb810d8cdb
Collecting flask
Downloading Flask-2.2.2-py3-none-any.whl (101 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.5/101.5 kB 2.2 MB/s eta 0:00:00
Collecting Jinja2>=3.0
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB 2.5 MB/s eta 0:00:00
Collecting Werkzeug>=2.2.2
Downloading Werkzeug-2.2.2-py3-none-any.whl (232 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 232.7/232.7 kB 4.7 MB/s eta 0:00:00
Collecting itsdangerous>=2.0
Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting click>=8.0
Downloading click-8.1.3-py3-none-any.whl (96 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.6/96.6 kB 2.7 MB/s eta 0:00:00
Collecting importlib-metadata>=3.6.0
Downloading importlib_metadata-5.1.0-py3-none-any.whl (21 kB)
Collecting zipp>=0.5
Downloading zipp-3.11.0-py3-none-any.whl (6.6 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Installing collected packages: zipp, MarkupSafe, itsdangerous, click, Werkzeug, Jinja2, importlib-metadata, flask
Successfully installed Jinja2-3.1.2 MarkupSafe-2.1.1 Werkzeug-2.2.2 click-8.1.3 flask-2.2.2 importlib-metadata-5.1.0 itsdangerous-2.1.2 zipp-3.11.0
Removing intermediate container 40bb810d8cdb
---> 7c869e9d37ec
Step 7/8 : EXPOSE 5000/tcp
---> Running in d1e17c7bc62a
Removing intermediate container d1e17c7bc62a
---> d7e0219df4d5
Step 8/8 : CMD ["python3", "main.py"]
---> Running in 95c1b083da90
Removing intermediate container 95c1b083da90
---> 952e8a95ab6f
Successfully built 952e8a95ab6f
Successfully tagged myapp:1.0
Yeah, we successfully built a Docker image. As you can see from the output there were eight steps completed during the image build process, according to the Dockerfile. Steps 7 and 8, which relate to the EXPOSE and CMD statements, take effect when the image is run. Let’s check if the image is there and what it looks like.
developer@devbox:~/app$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
myapp 1.0 952e8a95ab6f 3 minutes ago 953MB
python 3.9 7d357ce6a803 2 days ago 915MB
hello-world latest feb5d9fea6a5 14 months ago 13.3kB
With docker image inspect myapp you could take a look at the details of the image. I did not add the output here to avoid overloading this post with more information. If you would like to make a change to your Docker image you simply change the Dockerfile and run docker build . -t myapp again. This creates a new image with the tag latest. I rebuilt without any changes, so both tags point to the same image ID.
developer@devbox:~/app$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
myapp 1.0 be6d20d775cc About a minute ago 953MB
myapp latest be6d20d775cc About a minute ago 953MB
# output omitted
We can start a new Docker container from the myapp image with docker run --rm -it -p 5000:5000 myapp, which will use the latest version of our image. We use the option --rm to remove the container after exiting, -i for interactive mode, and -t to allocate a pseudo-TTY. We also need to make the app externally reachable for our test with -p, which publishes a container’s port to the host using the host_port:container_port syntax.
developer@devbox:~/app$ docker run -it -p 5000:5000 myapp
* Serving Flask app 'main'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000
Press CTRL+C to quit
* Restarting with stat
* Debugger is active!
* Debugger PIN: 454-847-600
For now we don’t care about networking. By default, new Docker containers are attached to the default bridge network and can communicate with other containers on that network. That is enough to know for now, because I will cover Docker networking in the next blog post.
Open a web browser and connect to the IP address of the devbox on port 5000, which in my case is http://192.168.11.51:5000/, and you should get this page:
Take a look at the debug output from the flask app and see the successful GET request.
# output omitted
192.168.11.1 - - [10/Dec/2022 15:28:14] "GET / HTTP/1.1" 200 -
# output omitted
Hit CTRL+C to exit the app; the container will stop and be removed. Our app container image is working. Now let’s take a look at the image for the load balancer.
We change into the lb directory and create another Dockerfile with the filename Dockerfile for the load balancer image. We use the latest nginx Docker image; NGINX is an open source reverse proxy server as well as a load balancer, HTTP cache, and web server.
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080/tcp
CMD ["nginx", "-g", "daemon off;"]
We again use the COPY statement to copy the configuration file into the image and the EXPOSE statement to let the container listen on TCP port 8080. Last but not least we use the CMD statement to start the load balancer with the configuration below.
events {}
http {
upstream myapp {
server 172.17.0.2:5000;
server 172.17.0.3:5000;
}
server {
listen 8080;
server_name localhost;
location / {
proxy_pass http://myapp;
proxy_set_header Host $host;
}
}
}
We need to put in the servers between which we want to load balance the traffic, including the port, plus some basic settings. For more details about the nginx services please go to the nginx documentation. We need to create this configuration and save it as nginx.conf in the lb folder. Then we can build our Docker load balancer image with the tag lb using docker build . -t lb.
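By the way, nginx distributes requests across the upstream servers round-robin by default. If you ever need a different strategy, the upstream block accepts a load-balancing method directive; a variant of the block above using least_conn would look like this (not needed for this lab, just for illustration):

```nginx
upstream myapp {
    # send each request to the server with the fewest active connections
    least_conn;
    server 172.17.0.2:5000;
    server 172.17.0.3:5000;
}
```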
developer@devbox:~/lb$ docker build . -t lb
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM nginx
latest: Pulling from library/nginx
025c56f98b67: Pull complete
ca9c7f45d396: Pull complete
ed6bd111fc08: Pull complete
e25b13a5f70d: Pull complete
9bbabac55ab6: Pull complete
e5c9ba265ded: Pull complete
Digest: sha256:ab589a3c466e347b1c0573be23356676df90cd7ce2dbf6ec332a5f0a8b5e59db
Status: Downloaded newer image for nginx:latest
---> ac8efec875ce
Step 2/4 : COPY nginx.conf /etc/nginx/nginx.conf
---> 87b2c9fa3285
Step 3/4 : EXPOSE 8080
---> Running in f36fda5062e2
Removing intermediate container f36fda5062e2
---> d55ea235188e
Step 4/4 : CMD ["nginx", "-g", "daemon off;"]
---> Running in bb552d7895c2
Removing intermediate container bb552d7895c2
---> 8776a416d609
Successfully built 8776a416d609
Successfully tagged lb:latest
Perfect, we successfully created our load balancer image. As you can see, this time there were four steps during the Docker image creation. Now our local image library has grown.
developer@devbox:~/lb$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
lb latest 8776a416d609 2 days ago 142MB
myapp 1.0 be6d20d775cc 2 days ago 953MB
myapp latest be6d20d775cc 2 days ago 953MB
python 3.9 7d357ce6a803 4 days ago 915MB
nginx latest ac8efec875ce 6 days ago 142MB
hello-world latest feb5d9fea6a5 14 months ago 13.3kB
Now let’s bring all containers up and test the load balancing feature.
As we don’t focus on Docker networking for now, we need to start the application containers from the design first so that they get the first and second IP addresses we specified in the load balancer configuration file. The command docker run -itd myapp starts an application container in the background, using the additional -d option for detached mode. We need to run this command twice to get two application containers running.
Then we run docker run -itd -p 8080:8080 lb to create and start the load balancer container, also in detached mode and with port 8080 exposed to make it available for us. Let’s check the containers with docker ps; we should have three running containers.
developer@devbox:~/lb$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ca2a4ed8dcf lb "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp nostalgic_torvalds
50dc6ca1ce05 myapp "venv/bin/python3 ma…" 7 minutes ago Up 7 minutes 5000/tcp vigilant_wright
46f85ddcd5cd myapp "venv/bin/python3 ma…" 7 minutes ago Up 7 minutes 5000/tcp gracious_margulis
It looks good, so let’s test the connection with a web browser using the IP address of the devbox again, but this time on port 8080 for the load balancer, which in my case is http://192.168.11.51:8080/. We should get the following result:
Now we refresh the web browser and should get a response from the second server as the default behavior of the load balancer is round-robin:
Great, it worked! We built a simple application framework with two servers and a load balancer in front of them, all in Docker containers from our own images. I hope it was easy to follow and to replicate on your own setup. If you run into any issues with the setup or if you find any errors, please let me know and/or leave a comment using the GitHub issues.
Thank you for reading this blog post and following along until the end. Stay tuned for the next blog post about Docker, where we will optimize the setup we built today using Docker networking.
I was recently playing around with CML while creating a new lab for testing the Cisco pyATS framework. I quickly came to the point of using the official Python library virl2_client for CML, which provides a Python package to programmatically create, edit, delete, and control your network simulations on a CML controller.
In my case I wanted to create a pyATS testbed automatically from a lab on the CML controller rather than creating it manually. The automated process will save me some time in the future, and I will be able to focus on creating my test cases using pyATS. During this process I stumbled over an issue which had an obvious solution. Let’s take a look.
As I had just got a new laptop at this time, I had not yet installed all the tools I used before and also had not pulled all the repositories I was working on. So I started by cloning the private repository I was previously working on, created a new virtual environment, and installed pyATS using pip, the package installer for Python:
(pyats-test) pip install pyats
Note: There are different options how to install pyATS described in the pyATS documentation.
After that I installed the virl2-client library:
(pyats-test) pip install virl2-client
Now I thought I could quickly create my testbed and start with testing. I used a small Python script which I copied from the blog post “How can I automate device configurations using CML2?” by Hank Preston, which contains a very good introduction and explanation of how to use CML and pyATS.
from virl2_client import ClientLibrary
# Create a client object for interacting with CML
client = ClientLibrary("https://<YOUR-CML-IP/URL>", "<CML-USER>", "<CML-PASSWORD>", ssl_verify=False)
# Find your lab. Method returns a list, this assumes the first lab returned is what you want
lab = client.find_labs_by_title("Multi Platform Network")[0]
# Retrieve the testbed for the lab
pyats_testbed = lab.get_pyats_testbed()
# Write the YAML testbed out to a file
with open("lab_testbed.yaml", "w") as f:
f.write(pyats_testbed)
You have to fill in your CML data (or even better, use environment variables), add the name of your lab, and specify the output file for the testbed. The script does the rest for you and is, thanks to the comments, largely self-explanatory. But this is the step where I ran into the issue after running the Python script:
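The environment-variable approach mentioned above could look like the following sketch. The variable names `CML_URL`, `CML_USER`, and `CML_PASSWORD` are my own convention, not anything virl2_client requires:

```python
import os

# Read the CML connection details from the environment instead of
# hard-coding credentials in the script (and in version control);
# the fallback values here are only placeholders for the demo
cml_url = os.getenv("CML_URL", "https://192.0.2.10")
cml_user = os.getenv("CML_USER", "admin")
cml_password = os.getenv("CML_PASSWORD", "")

# The ClientLibrary call from the script above then becomes:
# client = ClientLibrary(cml_url, cml_user, cml_password, ssl_verify=False)
print(cml_url)
```

Export the variables in your shell before running the script, for example with `export CML_URL=https://<YOUR-CML-IP/URL>`.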
(pyats-test) $ python create_testbed.py
SSL Verification disabled
Traceback (most recent call last):
File "/Users/danielkuhl/Coding/pyats-test/create_testbed.py", line 4, in <module>
client = ClientLibrary("https://<YOUR-CML-IP/URL>", "<CML-USER>", "<CML-PASSWORD>", ssl_verify=False)
File "/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/virl2_client/virl2_client.py", line 281, in __init__
self.check_controller_version()
File "/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/virl2_client/virl2_client.py", line 429, in check_controller_version
raise InitializationError(
virl2_client.virl2_client.InitializationError: Controller version 2.2.3+build63 is marked incompatible! List of versions marked explicitly as incompatible: [2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.1, 2.2.2, 2.2.3].
The check of the controller version failed. The error told me that the virl2_client version I was using was incompatible with the controller version. At that time I was confused and not aware that the virl2_client version needs to match the controller version. That’s why I thought I had stumbled over a bug and somewhat hastily created my first GitHub issue ever.
I should have checked the error message more carefully and also double-checked the documentation. As you can see from the GitHub issue, the developers of virl2_client responded very fast and mentioned that the error was expected with my controller version. Another lesson learned on the journey.
First I checked the virl2_client version. By default pip installs the latest release, which was 2.4.0, matching the latest released CML controller version 2.4.0. My CML controller, however, was running the recommended version 2.2.3 as of the time of writing this article:
(pyats-test) $ pip list | grep virl2
virl2-client 2.4.0
With that I confirmed the version mismatch. Then I uninstalled virl2_client:
(pyats-test) $ pip uninstall virl2_client
Found existing installation: virl2-client 2.4.0
Uninstalling virl2-client-2.4.0:
Would remove:
/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/examples/demo.ipynb
/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/examples/licensing.py
/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/examples/link_conditioning.py
/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/examples/sample.py
/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/virl2_client-2.4.0.dist-info/*
/Users/danielkuhl/Coding/pyats-test/lib/python3.9/site-packages/virl2_client/*
Proceed (Y/n)? Y
Successfully uninstalled virl2-client-2.4.0
Then I re-installed it, pinning the version to the latest release below 2.3.0, which is 2.2.1.post2. I verified this in the documentation after the virl2-client developers pointed me to it:
(pyats-test) $ pip install "virl2-client<2.3.0"
Collecting virl2-client<2.3.0
Using cached virl2_client-2.2.1.post2-py3-none-any.whl (52 kB)
Requirement already satisfied: requests<3,>=2 in ./lib/python3.9/site-packages (from virl2-client<2.3.0) (2.28.1)
Requirement already satisfied: requests-toolbelt<0.10.0,>=0.9.1 in ./lib/python3.9/site-packages (from virl2-client<2.3.0) (0.9.1)
Requirement already satisfied: charset-normalizer<3,>=2 in ./lib/python3.9/site-packages (from requests<3,>=2->virl2-client<2.3.0) (2.1.0)
Requirement already satisfied: idna<4,>=2.5 in ./lib/python3.9/site-packages (from requests<3,>=2->virl2-client<2.3.0) (3.3)
Requirement already satisfied: certifi>=2017.4.17 in ./lib/python3.9/site-packages (from requests<3,>=2->virl2-client<2.3.0) (2022.6.15)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in ./lib/python3.9/site-packages (from requests<3,>=2->virl2-client<2.3.0) (1.26.11)
Installing collected packages: virl2-client
Successfully installed virl2-client-2.2.1.post2
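As a side note, the compatibility rule that tripped me up can be summarized in a few lines of Python. This is my own simplified sketch, not the actual check inside virl2_client: client and controller must agree on the major.minor version, ignoring build metadata such as `+build63` and post-release suffixes such as `.post2`:

```python
def versions_compatible(client_version: str, controller_version: str) -> bool:
    """Simplified sketch of the virl2-client/CML compatibility rule:
    both sides must agree on the major.minor version."""

    def major_minor(version: str) -> tuple:
        # Strip build metadata ("+build63") and post-release suffixes (".post2")
        core = version.split("+")[0].split(".post")[0]
        major, minor = core.split(".")[:2]
        return int(major), int(minor)

    return major_minor(client_version) == major_minor(controller_version)

print(versions_compatible("2.2.1.post2", "2.2.3+build63"))  # True
print(versions_compatible("2.4.0", "2.2.3+build63"))        # False
```

With virl2-client 2.2.1.post2 against controller 2.2.3+build63 the major.minor pairs match, which is why the re-install fixed the error.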
Now I was back on the right path. Let’s check whether creating the testbed works now.
Run the Python script again:
(pyats-test) $ python create_testbed.py
SSL Verification disabled
Then validate the testbed:
(pyats-test) $ pyats validate testbed lab_testbed.yaml
Loading testbed file: lab_testbed.yaml
--------------------------------------------------------------------------------
Testbed Name:
LAB-TEST
Testbed Devices:
|-- host-01 [linux/server]
| |-- eth0 ----------> l14
| `-- eth1 ----------> l16
|-- rtr-edge [ios/iosv]
| |-- GigabitEthernet0/0 ----------> l1
| |-- GigabitEthernet0/1 ----------> l7
| |-- GigabitEthernet0/2 ----------> l8
| |-- GigabitEthernet0/3
| `-- Loopback0 ----------> rtr-edge:Loopback0
|-- server-01 [linux/server]
| |-- eth0 ----------> l15
| `-- eth1 ----------> l17
|-- sw-acc-01 [ios/iosv]
| |-- GigabitEthernet0/0 ----------> l5
| |-- GigabitEthernet0/1 ----------> l12
| |-- GigabitEthernet0/2 ----------> l16
| |-- GigabitEthernet0/3
| `-- Loopback0 ----------> sw-acc-01:Loopback0
|-- sw-acc-02 [ios/iosv]
| |-- GigabitEthernet0/0 ----------> l6
| |-- GigabitEthernet0/1 ----------> l13
| |-- GigabitEthernet0/2 ----------> l17
| |-- GigabitEthernet0/3
| `-- Loopback0 ----------> sw-acc-02:Loopback0
|-- sw-core [ios/iosv]
| |-- GigabitEthernet0/0 ----------> l2
| |-- GigabitEthernet0/1 ----------> l8
| |-- GigabitEthernet0/2 ----------> l9
| |-- GigabitEthernet0/3 ----------> l10
| `-- Loopback0 ----------> sw-core:Loopback0
|-- sw-dist-01 [ios/iosv]
| |-- GigabitEthernet0/0 ----------> l3
| |-- GigabitEthernet0/1 ----------> l9
| |-- GigabitEthernet0/2 ----------> l11
| |-- GigabitEthernet0/3 ----------> l12
| `-- Loopback0 ----------> sw-dist-01:Loopback0
|-- sw-dist-02 [ios/iosv]
| |-- GigabitEthernet0/0 ----------> l4
| |-- GigabitEthernet0/1 ----------> l10
| |-- GigabitEthernet0/2 ----------> l11
| |-- GigabitEthernet0/3 ----------> l13
| `-- Loopback0 ----------> sw-dist-02:Loopback0
`-- terminal_server [linux/linux]
YAML Lint Messages
------------------
Warning Messages
----------------
- Device 'terminal_server' has no interface definitions
Well, this output looks much better than before. As you can see, all devices from the lab are listed, including the link labels next to the interfaces. Interfaces that share the same link label are connected to each other. This information opens up some additional opportunities for automation: you could use the link labels to configure both ends of a connection, or even to draw a full network topology, to mention only the first ideas that come to my mind.
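To illustrate the first idea, here is a small self-contained sketch that derives the point-to-point links from shared link labels. The interface-to-link mapping below is a subset copied by hand from the validate output above; in practice you would parse it from the generated testbed YAML:

```python
from collections import defaultdict

# (device, interface) -> link label, as shown in the pyats validate output
interfaces = {
    ("rtr-edge", "GigabitEthernet0/1"): "l7",
    ("rtr-edge", "GigabitEthernet0/2"): "l8",
    ("sw-core", "GigabitEthernet0/1"): "l8",
    ("sw-core", "GigabitEthernet0/2"): "l9",
    ("sw-dist-01", "GigabitEthernet0/1"): "l9",
}

# Group endpoints by link label: two endpoints sharing a label form a link
links = defaultdict(list)
for endpoint, label in interfaces.items():
    links[label].append(endpoint)

topology = {
    label: endpoints for label, endpoints in links.items() if len(endpoints) == 2
}

for label in sorted(topology):
    (dev1, intf1), (dev2, intf2) = topology[label]
    print(f"{label}: {dev1} {intf1} <-> {dev2} {intf2}")
```

From this `topology` mapping you could then template the configuration for both ends of each link or feed the edges into a graph library for drawing.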
I hope you liked this small journey about a really good lesson learned for me regarding software version compatibility checks and reading the documentation before creating a GitHub issue. As I wrote, it was the first time I created a public GitHub issue, and it was a good way to learn. I had to read about how to create a GitHub issue and what information is useful to include. I can highly recommend reading through the section About GitHub issues on GitHub.
Below you will find all the links used in this article. Thank you so much for reading. Please feel free to leave a comment or get in contact with me on social media if you haven’t already.
I had the great pleasure to meet a lot of my social media connections from LinkedIn and Twitter, and of course from the Cisco Learning Network and the Cisco Champion group. It was a lot of fun and a pleasure to build even closer relationships and friendships with many people I only knew virtually until then.
My personal focus in the technical sessions was pretty much dominated by topics around the Cisco DevNet Expert certification. After achieving the Cisco Certified DevNet Professional in June, I started collecting information about the lab exam for my preparation journey. Therefore I booked a dedicated techtorial session and a related hands-on lab session, and I spent a lot of time in the DevNet zone attending various workshops.
By the way, if you are interested in achieving the Cisco Certified DevNet Professional, I wrote a knowledge base article, published in the Cisco Learning Network, about my Road to the Cisco Certified DevNet Professional. Please visit it if you are interested and leave a comment with feedback.
Below you will find the list of sessions I attended during Cisco Live, excluding some DevNet workshops where I was only listening in and not actively participating because I didn’t get a seat:
Most sessions, except the extra-paid ones, are also available in the Cisco Live on-demand library or the Cisco Developer Network Learning Lab Center. The links in the list will take you to the available sessions. Now I want to highlight the extra-paid sessions I attended and share what they were about.
The four-hour techtorial session was very good for getting additional insights into the DevNet Expert lab exam. The attendees got many more details about how to approach the exam topics during preparation, and that you need to carefully go through each topic. Another important point was how to build your strategy around the lab format, which contains two modules: a three-hour “Design” module and a five-hour “Develop, Test, Deploy and Maintain” module. We walked through some sample lab tasks to get a better understanding of the scenarios and the constraints you have to deal with during the exam. It is so important to read carefully and categorize the different parts of the questions into tasks and constraints. It was also very helpful to ask questions directly of the team who built the exam and to follow the questions of other attendees. I can highly recommend booking such a session when you plan to take an expert-level certification.
The four-hour lab session was so much fun. A lab environment was prepared in Cisco Modeling Labs which contained a test and a prod environment as well as a developer machine. You had to master different tasks like managing network configuration in the Git version control system and leveraging network test scripts using the pyATS framework and the Cisco Modeling Labs APIs. After that you had to build a Continuous Integration / Continuous Deployment (CI/CD) pipeline definition that automatically deployed network changes using Ansible into the virtual network provided by CML. The final step was to run the pyATS test scripts to validate the changes and deploy the same configuration into production. It was a really cool lab and very well prepared.
One of my personal highlights aside from the technical sessions was the VIP dinner of the Cisco Learning Network and Cisco Community. It was really a huge honor and a pleasure for me to be there. I had the chance to connect with the fellow VIPs and the community team who builds, maintains, and evolves this awesome community. We had a delicious dinner and some cold drinks at a very nice bar called the House of Blues, located at the Mandalay Bay Hotel. Thank you very much to the whole community team for this evening!
Also many thanks to the Cisco Champion organization team! They did a really awesome job providing a lot of tours around Cisco Live which gave us insights by looking behind the scenes of the Cisco Live NOC, Cisco TV, the DevNet Zone, the Cisco Store, and many more. It was always nice to stop by the Champion’s lounge and catch up with some fellows while grabbing a coffee.
The next Cisco Live event for me will probably be Cisco Live 2023 Amsterdam from February 6-10, 2023. I’m looking forward to seeing all the friends I made in Las Vegas again in Amsterdam, and I hope there will be more people I already know virtually to connect with in person. Until then, stay safe and healthy!
The Cisco Designated VIP program recognizes the top contributors in Cisco’s online communities, known as the Cisco Learning Network and the Cisco Support Community. These VIPs earned this status by helping people in those online communities and sharing their knowledge and expertise. The experience and the quality of the Cisco Designated VIPs’ contributions is proven and can be trusted.
I was recognized because of my contributions in the Cisco Learning Network, especially during the last year, in which I was heavily active in the Cisco Modeling Labs Personal and Cisco DevNet Certifications communities. It is so much fun to help other people with issues or help them get familiar with the community, technologies, and tools.
Sometimes it is only about pointing people in the right direction and showing them where to find more information and/or documentation. Many questions seem to be solved easily, but we all know situations where we were stuck on an issue and didn’t find a solution ourselves. The Cisco Learning Network helped us find a solution in previous discussions, or we started a new thread about our problem and got help from other community members. Don’t be shy and don’t hold back beginner questions, because we were all beginners at some point and there are definitely no stupid questions, only those which were not asked.
Overall the Cisco Learning Network is a very helpful and kind community, and I can highly recommend joining and contributing. Below you will find a collection of links about getting started on the Cisco Learning Network and more details about the Cisco VIP program. Start your journey in the Cisco Learning Network community and check out the requirements to become a Cisco Designated VIP. Please also take a look at the new forum for the Community Impact Program and consider participating there. It is all about giving back.
If you have any comments or questions, don’t hesitate to contact me. You will find all the information on how to contact me at the bottom of the page. Thanks for reading.
Recently, during the Christmas holidays, I was writing another post and reviewed the setup of my blog. Needless to say, I have not delivered much content in 2021, but I promise to do better in 2022. It was not only about the missing content; I was also satisfied with neither the current WordPress template nor WordPress overall. I wanted a simple and flat design without any further modules and add-ons that I have to manage and integrate. I want to focus on content in 2022.
Then I saw an article about GitHub Pages, which lets you host your web pages directly from your GitHub repository, and I immediately started testing Setting up a GitHub Pages site with Jekyll. As I am very focused on network automation these days, I liked the idea of treating my blog pages as code. After some reading through the docs I discovered that a Git push to GitHub triggers the automatic deployment and publishing of your pages. This convinced me even more, and I started to rebuild my blog page in VS Code.
The setup on GitHub is well explained and easy to deploy. The first steps with Jekyll and Ruby also went well. The Jekyll documentation was not so easy for me to follow, but there is a tutorial which I recommend going through. It will help you understand the basics so you can evolve on top of that.
In less than an hour I had built the first skeleton of my new blog page. My prior knowledge of HTML and experience with Markdown helped a lot to make fast progress. Then I spent a lot of time getting to know all the various options and features for fine-tuning. There is also a very helpful option to build your web page locally during development and preview it at the server address http://127.0.0.1:4000/ before pushing your changes.
The only drawback at this point was that my blog page would then be hosted on GitHub under a new URL like https://username.github.io, in my case https://daniel1820815.github.io.
After some research I found the Deploy Now feature provided by my hosting provider IONOS, which allows you to connect your GitHub account and instantly deploy your static web projects to your selected domain.
Every time you push your local changes to the master branch of your repository on GitHub, the IONOS Deploy Now bot starts the workflow and deploys the web page with the latest changes.
If you don’t want your changes to be deployed immediately after pushing, you can create a branch and work on your new content within that branch. After you have completed your work, you only need to merge your branch into master and the automatic deployment will do the rest for you.
Take a look at my GitHub repository for these blog pages. I will now try to focus more on the content of my blog, but alongside that I will explore the options of GitHub Pages and Jekyll. I hope you enjoyed this short side trip. Please reach out to me via email or social media if you have any questions. You will find all the details below the post in the footer.
For more information on how to get the most out of Jekyll, take a look at the Jekyll docs. File bugs and feature requests at Jekyll’s GitHub repo. If you have questions, you can ask them on Jekyll Talk.