Limitless Networking

Author: Rick Mur

Using the Junos Space REST API

Automation is going to be fundamental in all networking products. I’ve been working a lot on integrating Juniper products into existing, standard software. There are many different ways to automate something on a network running Junos, and using REST (or RESTful) APIs is one of them. The reason I’m using REST is that it’s fairly easy to understand, but the best part is that a large number of existing products support REST as an integration point.

The goal of this blog is to explain how Junos products support REST, how that works with older versions, and how it scales.

What is REST?

REST (REpresentational State Transfer) is a simple stateless architecture that generally runs over HTTP. There are 4 commonly supported commands. When you issue a command, your request consists of a URL, HTTP headers and a body holding the data.

HTTP headers are used for things like authentication and a Content-Type that lets the application know what data format the body will contain.
The URL specifies which data you want to receive from the application or which data you want to change.
The body is empty in a request for data; when you want to change some data, it typically contains XML or JSON (as defined by the Content-Type HTTP header).

The commonly supported commands are:

GET – Request data from the application
PUT – Change/Update data
POST – Add new data
DELETE – Delete existing data

It’s very important to know which commands your application supports.

It could be that an application only supports PUT commands instead of POST, so data you submit will override any existing entries (as PUT is a Change/Update action).
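
To make this concrete, here is a minimal sketch of the four commands using the Python requests library. The endpoint, resource paths and credentials are all hypothetical placeholders for illustration, not part of any Junos product.

import requests

BASE = "https://api.example.com"        # hypothetical endpoint, for illustration only
AUTH = ("admin", "password")            # placeholder credentials

# GET – request data from the application (empty body)
r = requests.get(f"{BASE}/items", auth=AUTH)

# POST – add new data; the body carries the entry and requests sets the
# Content-Type header to application/json when you use json=
r = requests.post(f"{BASE}/items", auth=AUTH, json={"name": "item1"})

# PUT – change/update an existing entry
r = requests.put(f"{BASE}/items/1", auth=AUTH, json={"name": "item1-renamed"})

# DELETE – delete an existing entry
r = requests.delete(f"{BASE}/items/1", auth=AUTH)

print(r.status_code)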

To test commands against a REST API there are many great browser plugins that let you build a REST call step-by-step. Below are two great free plugins:

Postman for Chrome

RESTClient for Firefox

Junos and REST

There are 2 options for using REST with Junos products. The first one is to run it against Junos itself. Starting in Junos 14.2 there is full support for accessing any feature on a box through the REST API. I will not focus on that in this blog, because I want to look at a more scalable solution that is also compatible with older versions. All information regarding the Junos REST API is available on the public documentation page: Junos 14.2 REST API Guide

The second option, which supports any Junos version, is to use the Juniper network management platform: Junos Space.

Junos Space is a platform that hosts applications for managing your Juniper network devices. Any Juniper device running Junos is supported. Applications like Security Director, Network Director, etc. are written by Juniper, but there are also applications written by partners for various other purposes. Applications have to be installed separately. In this blog I will be using the foundation: the Network Management Platform (or NMP).

This platform connects to all of your devices running Junos and uses NETCONF as the southbound protocol to communicate with them. It detects which version of Junos a device is running and selects the correct schema; a schema consists of all possible configuration for that particular hardware platform and software version. The REST API of the NMP in Junos Space covers 100% of the functions you can access in the GUI, so I have a fully featured management platform that I can drive through a REST API and use to reach my Junos devices. Very scalable and easily managed.

Let’s issue our first REST command to our Junos Space platform:

[Screenshot: first REST call to the Junos Space API]

We first need to authenticate ourselves, just like we would in the GUI. We do this by adding an Authorization HTTP header. Your REST client should support encoding the credentials for you and generating the header.
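
If you are curious what that header looks like: Basic authentication is simply the Base64 encoding of "username:password". Below is a small sketch of building it by hand in Python and of letting the requests library do it for you. The hostname matches my lab, the credentials are placeholders, and verify=False is only there because my lab uses a self-signed certificate.

import base64
import requests

user, password = "super", "secret"     # placeholder credentials
url = "https://maverickspace.maverick.local/api/space/"   # Junos Space API root in my lab

# Building the Authorization header by hand: Base64 of "username:password"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
r = requests.get(url, headers={"Authorization": f"Basic {token}"}, verify=False)

# Or let the library encode the credentials for you
r = requests.get(url, auth=(user, password), verify=False)
print(r.status_code)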

After adding my credentials, we see that I have access to all kinds of different services.
[Screenshot: list of services exposed by the Junos Space API]

If I want to dive deeper, I just follow the link given for a service.

Device Management

As an example, I want to take a look at the devices currently registered in my Junos Space platform, so I follow the link to Device Management and then Devices.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<devices uri="/api/space/device-management/devices" size="3">
    <device href="/api/space/device-management/devices/131087" uri="/api/space/device-management/devices/131087" key="131087">
        <deviceFamily>junos-es</deviceFamily>
        <OSVersion>12.1X46-D10.2</OSVersion>
        <platform>SRX210H-POE</platform>
        <serialNumber>XXXXXXX</serialNumber>
        <connectionStatus>up</connectionStatus>
        <ipAddr>172.22.1.4</ipAddr>
        <managedStatus>In Sync</managedStatus>
        <name>MaverickSRX</name>
        <domain-id>2</domain-id>
        <domain-name>Global</domain-name>
    </device>
    <device href="/api/space/device-management/devices/131095" uri="/api/space/device-management/devices/131095" key="131095">
        <deviceFamily>junos</deviceFamily>
        <OSVersion>14.1R2.12</OSVersion>
        <platform>VMX</platform>
        <connectionStatus>up</connectionStatus>
        <ipAddr>172.22.1.151</ipAddr>
        <managedStatus>In Sync</managedStatus>
        <name>vMX1</name>
        <domain-id>2</domain-id>
        <domain-name>Global</domain-name>
    </device>
    <device href="/api/space/device-management/devices/131099" uri="/api/space/device-management/devices/131099" key="131099">
        <deviceFamily>junos</deviceFamily>
        <OSVersion>14.1R2.12</OSVersion>
        <platform>VMX</platform>
        <connectionStatus>up</connectionStatus>
        <ipAddr>172.22.1.152</ipAddr>
        <managedStatus>In Sync</managedStatus>
        <name>vMX2</name>
        <domain-id>2</domain-id>
        <domain-name>Global</domain-name>
    </device>
</devices>
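
If you want to consume this XML from a script rather than a REST client, Python's standard library is enough to walk the device list. A minimal sketch, with placeholder credentials, my lab hostname, and verify=False only because of the self-signed lab certificate:

import requests
import xml.etree.ElementTree as ET

url = "https://maverickspace.maverick.local/api/space/device-management/devices"
r = requests.get(url, auth=("super", "secret"), verify=False)   # placeholder credentials

root = ET.fromstring(r.content)          # the <devices> element
for device in root.findall("device"):
    name = device.findtext("name")
    ip = device.findtext("ipAddr")
    status = device.findtext("connectionStatus")
    print(f"{name:12} {ip:15} {status}")  # e.g. MaverickSRX  172.22.1.4  up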

By default the output is in XML format. If you prefer to work with JSON, that is supported as well; we just have to add another header. A typical REST API uses a generic Content-Type (for PUT/POST) or Accept (for GET) header for this, usually application/xml or application/json. Junos Space, however, requires a specific media type string for each request, depending on what you want to see. To look up this string you can check the online documentation, which is extremely well written and detailed: Junos Space REST API Guide Device Management

In the guide we find that we have to specify an Accept header of: application/vnd.net.juniper.space.device-management.devices+json;version=1.

Then we issue the same request again, but now with the Accept header added.

Request

GET /api/space/device-management/devices HTTP/1.1
Host: maverickspace.maverick.local
Authorization: Basic XXXXXX=
Accept: application/vnd.net.juniper.space.device-management.devices+json;version=1
Cache-Control: no-cache

Response

{
    "devices": {
        "@uri": "/api/space/device-management/devices",
        "@size": "3",
        "device": [
            {
                "@href": "/api/space/device-management/devices/131087",
                "@uri": "/api/space/device-management/devices/131087",
                "@key": "131087",
                "deviceFamily": "junos-es",
                "OSVersion": "12.1X46-D10.2",
                "platform": "SRX210H-POE",
                "serialNumber": "XXXXX",
                "connectionStatus": "up",
                "ipAddr": "172.22.1.4",
                "managedStatus": "In Sync",
                "name": "MaverickSRX",
                "domain-id": 2,
                "domain-name": "Global"
            },
            {
                "@href": "/api/space/device-management/devices/131095",
                "@uri": "/api/space/device-management/devices/131095",
                "@key": "131095",
                "deviceFamily": "junos",
                "OSVersion": "14.1R2.12",
                "platform": "VMX",
                "connectionStatus": "up",
                "ipAddr": "172.22.1.151",
                "managedStatus": "In Sync",
                "name": "vMX1",
                "domain-id": 2,
                "domain-name": "Global"
            },
            {
                "@href": "/api/space/device-management/devices/131099",
                "@uri": "/api/space/device-management/devices/131099",
                "@key": "131099",
                "deviceFamily": "junos",
                "OSVersion": "14.1R2.12",
                "platform": "VMX",
                "connectionStatus": "up",
                "ipAddr": "172.22.1.152",
                "managedStatus": "In Sync",
                "name": "vMX2",
                "domain-id": 2,
                "domain-name": "Global"
            }
        ]
    }
}
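
From a script, the JSON variant only differs in that one Accept header; the media type string comes straight from the API guide. A sketch along the same lines as before (placeholder credentials, lab hostname, self-signed certificate):

import requests

url = "https://maverickspace.maverick.local/api/space/device-management/devices"
headers = {
    # Junos Space specific media type, requesting the JSON representation
    "Accept": "application/vnd.net.juniper.space.device-management.devices+json;version=1"
}
r = requests.get(url, auth=("super", "secret"), headers=headers, verify=False)

for device in r.json()["devices"]["device"]:
    print(device["name"], device["ipAddr"], device["managedStatus"])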

Tags and Filtering

The reason I’m using Junos Space is scale. I want to be able to apply a certain piece of configuration to, or get the output of a certain show command from, a list of devices. That device list can change over time as I add more nodes to my network, so I need a flexible and scalable way of talking to a group of devices. Within Junos Space I can assign one or more tags to a device, which I can then filter on in my REST requests.

I can add private and public tags. Private tags are visible only to your own user account; public tags are visible to everyone.

I’m going to add a private tag called ‘RESTtest’ to one of my devices.

[Screenshots: adding the private tag ‘RESTtest’ to a device]

Now I can leverage this tag to filter devices and find the ones I want a certain piece of information from. To filter, I use a simple query string: ?filter=(TAG eq 'RESTtest'). When I want to filter on public tags, I query them using ?filter=(TAG eq 'RESTtest:public'). The documentation explains in detail how tags can be used to filter and where in the API this applies: Junos Space REST API Guide Tag Management.

Now I filter on the newly created tag, and the output shows just a single device.

Request

GET /api/space/device-management/devices?filter=(TAG eq 'RESTtest') HTTP/1.1
Host: maverickspace.maverick.local
Authorization: Basic XXXXXXX=
Cache-Control: no-cache

Response

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<devices uri="/api/space/device-management/devices" size="1">
    <device href="/api/space/device-management/devices/131087" uri="/api/space/device-management/devices/131087" key="131087">
        <deviceFamily>junos-es</deviceFamily>
        <OSVersion>12.1X46-D10.2</OSVersion>
        <platform>SRX210H-POE</platform>
        <serialNumber>XXXXXX</serialNumber>
        <connectionStatus>up</connectionStatus>
        <ipAddr>172.22.1.4</ipAddr>
        <managedStatus>In Sync</managedStatus>
        <name>MaverickSRX</name>
        <domain-id>2</domain-id>
        <domain-name>Global</domain-name>
    </device>
</devices>
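
Scripted, the filter is just another query-string parameter, and requests takes care of URL-encoding the expression. The sketch below combines the tag filter with the JSON Accept header to collect the management IPs of every device tagged 'RESTtest' (same placeholder credentials and lab hostname as before):

import requests

url = "https://maverickspace.maverick.local/api/space/device-management/devices"
headers = {
    "Accept": "application/vnd.net.juniper.space.device-management.devices+json;version=1"
}
params = {"filter": "(TAG eq 'RESTtest')"}   # URL-encoded automatically by requests

r = requests.get(url, auth=("super", "secret"),
                 headers=headers, params=params, verify=False)

entries = r.json()["devices"].get("device", [])
if isinstance(entries, dict):                # normalise in case a single match comes back as an object
    entries = [entries]

tagged_ips = [d["ipAddr"] for d in entries]
print(tagged_ips)                            # e.g. ['172.22.1.4']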

 

Summary

REST is a very powerful tool for all kinds of integrations you may want to build. Stay tuned for other blogs where I will use the Junos Space REST API to integrate with other networking products. Junos Space is a great tool for maintaining and managing your Juniper devices, but most of all it gives you a central location from which you can talk to all Junos devices through REST and use tags to run commands on multiple devices without having to know the exact device details. It provides a great abstraction layer for your Junos-based network.

Home Lab Server

Currently I’m doing a lot of testing at home on Network Virtualization solutions, like VMware NSX, Juniper Contrail, etc. As a result I was stressing my single home server quite a lot. It is a custom-built Xeon E3-1230 quad core with 32GB of RAM and a 128GB SSD, built according to the specifications found at http://packetpushers.net/vmware-vcdx-lab-the-hardware/. It has been a great investment, as I run nested virtualization for both KVM and ESXi hypervisors and do my testing in there. Because a decent Network Virtualization (NV) set-up needs quite a lot of memory, especially if you look at the memory utilisation of the NV controller VMs, I had to expand my lab. I chose to add a second server so I would also be physically redundant, which makes it easier to run upgrades on the physical machines.

Requirements

My requirements aren’t demanding: since I mainly perform feature testing in my lab, I don’t need a lot of CPU performance. There are no “production” VMs running; everything is there to play around with, so downtime is not a problem if necessary.
Other requirements:

  • Average CPU performance
  • Nested virtualization support
  • At least 32GB of RAM, preferably (or upgradable to) 64GB
  • 4 or more SATA3 connections (to grow to a VSA set-up)
  • 2 or more 1Gbps Ethernet NICs
  • Out of band management (IPMI)
  • Low power
  • Small footprint
  • Low noise

The last 2 requirements are especially important to me. I run the lab on a shelf in a large closet, so I barely want to hear any fans, and I want to keep the footprint small to make sure I can expand the lab further without having to sacrifice another shelf.

Bill of materials

The bill of materials was as follows; I will explain the reasoning behind each component in detail. You can click the SKU to purchase the item on Amazon.

Description   SKU
Motherboard   ASRock C2750D4I
Memory        (4x) Kingston DDR3 1600MHz 8GB non-ECC KVR16N11H/8
Storage       Samsung 850 Pro SSD (128GB) MZ-7KE128BW
Case          LC Power LC-1410mi
Fan           Noctua NF-R8 PWM

I ordered this bill of materials at the Dutch webshop Azerty.nl; take a look at the screenshot below for the exact part numbers.

[Screenshot: Azerty.nl order with the exact part numbers]

Processor

I looked at various options for a good home lab CPU. I first considered an Apple Mac Mini: a powerful processor with low power consumption and a small footprint, but the 16GB memory limit ruled it out for me. The same goes for the Intel NUC boards. I continued the search for a decent multi-core mini-ITX motherboard that could hold a lot of memory; going for a Xeon seemed the only way to get more than 32GB. Then I found the Intel Atom Avoton chip. This next generation of Intel Atom processors is a very interesting one for home lab servers; you will find that the latest generation of Synology NAS systems runs on the same processor. The chip comes in 4-core and 8-core versions which, looking at single-core benchmarks, are not the fastest ever, but the multi-core performance makes up for a lot, and highly virtualised environments use the multi-core architecture very well. I looked at various benchmark tests (http://www.servethehome.com/Server-detail/intel-atom-c2750-8-core-avoton-rangeley-benchmarks-fast-power/) and found that this CPU would give me more than enough performance for the tests I run in my home lab, while still being very quiet and low power. On average it delivers about half of the performance of my existing Xeon E3-1200 V3 set-up.
Feature-wise, this CPU gives you everything you want for a virtualization lab:

  • 64-bit CPU
  • Supports VT
  • Supports nested virtualization

The next best thing is that the CPU only comes soldered to the motherboard and can be passively cooled, which brings us to the next topic: the motherboard.

Motherboard

There are 2 good options for a mini-ITX motherboard that features the Intel Avoton C2750. I only looked at the 8-core model, which is quite a bit more expensive, but gives you double the CPU power (especially in VM environments). There is also a C2758 model available, which does not feature Turbo Boost, a feature I thought would be beneficial in my lab as I need as much performance as I can get. The C2758 instead offers QuickAssist, which is used for integrating accelerators in embedded systems (like a NAS).

I narrowed my choice down to 2 mini-ITX motherboards that also meet all my other requirements for networking and out-of-band management.

Supermicro A1SAi-2750F (http://www.supermicro.co.uk/products/motherboard/Atom/X10/A1SAi-2750F.cfm)
+ This board features a passively cooled C2750 (some boards have a fan on board)
+ 4x 1GE LAN on-board
– Uses SO-DIMM modules (notebook memory)
– Marvell NIC, requiring an additional driver in ESXi

I’m usually a fan of Supermicro, as all my previous home servers had a Supermicro motherboard. Their support for ESXi is excellent and they have decent prices. The big downside of this board is the use of SO-DIMM modules, which are more expensive than regular DDR3 DIMMs.

ASRock C2750D4I (http://www.asrockrack.com/general/productdetail.asp?Model=C2750D4I)
+ Passively cooled C2750
+ Regular DDR3 DIMMs
+ A ton of SATA ports, giving the option to build a storage-heavy server in the future
+ Intel NICs with drivers built into ESXi 5.5 update 2
– Only 2x 1GE LAN on-board

I chose the ASRock based on the many benefits this board has over the Supermicro: it was cheaper, supported cheaper DIMMs, and didn’t require an additional driver to be installed in ESXi 5.5 update 2, which makes upgrades easier. The many SATA ports make it an excellent board to grow into a VSA appliance in the future when required.

NOTE! Even though the CPU is passively cooled, the board requires you to connect a fan to CPU_FAN1; otherwise the board will not power up.

[Photo: ASRock C2750D4I motherboard]

Memory

I first tried to find 16GB DIMMs that were affordable. Unfortunately, a single 16GB DIMM currently costs as much as four 8GB DIMMs, so my choice came down to four 8GB DIMMs for a total of 32GB of memory in the new server.
I cannot stress enough that you should purchase from the Memory QVL that your motherboard supplier publishes. Any other DIMM may work, may be unstable, or may not work at all. Fortunately, a couple of modules from Kingston’s affordable memory line were tested by ASRock, so I didn’t look further and bought those. The server has been rock solid on these Kingston DIMMs, running for weeks already.

Current Memory QVL for the ASRock C2750D4I

http://www.asrockrack.com/general/productdetail.asp?Model=C2750D4I#Memory%20QVL

I omitted the 4GB DIMMs from the table below as I need at least 32GB of RAM in the server.

Type  Speed  DIMM     Size  Vendor    Module
DDR3  1600   non-ECC  8GB   ADATA     AD3U1600C8G11-B
DDR3  1600   ECC      8GB   Apacer    78.C1GER.AT30C
DDR3  1600   ECC      8GB   Crucial   CT102472BD160B.18FED
DDR3  1600   ECC      8GB   innodisk  M3C0-8GSS3LPC
DDR3  1600   non-ECC  8GB   Kingston  KVR16N11H/8
DDR3  1333   ECC      16GB  Memphis   IMM2G72D3DUD8AG-B15E
DDR3  1333   non-ECC  16GB  Memphis   IMM2G64D3DUD8AG-B15E
DDR3  1333   ECC      16GB  Memphis   IMM2G72D3LDUD8AG-B15E

If you would like to purchase the 16GB DIMMs mentioned on this Memory QVL, please contact ASRock Sales ([email protected]) for a quote. They sell and ship these Memphis DIMMs worldwide.


Storage

I’ve been happily using a Synology DS713+ 2-bay NAS with 2 Western Digital RED 3TB disks for over a year. It is my primary source of shared storage for everything, including all of the VMs, so I don’t need a ton of storage in the server itself. I may want to play around with VMware VSAN or other VSA options, but for now I’ll keep everything stored on NFS shares on the Synology DS713+. The disks in the NAS are mirrored, meaning I only get the IOPS of a single disk, and with 20-30 VMs running on this NAS I notice performance going down. Therefore I chose to put a small 128GB SSD in both of my servers and use VMware Flash Read Cache. This technology lives inside ESXi and caches all disk reads performed by a VM (when Flash Read Cache is enabled for it); it also places the VM’s swap file on the SSD instead of in the folder where the VM is stored. This improves performance a lot in my lab, as my VMs are not storage heavy and usually consist of OS system files and some database files. Once they are first read from disk they are cached on the server’s SSD, and Windows and Linux VMs in particular benefit a lot from this!

In the screenshots below you can see my current usage of the Flash Read Cache.

[Screenshots: current Flash Read Cache usage]

Case and cooling

The case I chose is not a very exciting one. It’s much bigger than required for a mini-ITX board, but I wanted a low-profile case that would fit on top of my existing server and had a decent power supply built in (for cost savings). The case ended up being a great one, as it features a fan that sucks air into the chassis through the power supply.

As mentioned before, the CPU is passively cooled, but since it is heavily used by ESXi, running 10 or so VMs at any time, I needed additional cooling. In my existing server I use Noctua fans, as they are amazingly quiet and perform very well. I chose a simple 80mm fan and mounted it right above the CPU heat sink to suck air back out of the chassis. This creates a great airflow through the case, and my temperatures are very low for a server that constantly uses quite a bit of CPU, while still being almost completely silent.

[Photo: server internals with the Noctua fan above the CPU heat sink]

ESXi and vMotion

As the internal SSD is used only for caching, I installed ESXi 5.5 update 2 on a USB key. Since no other storage is required to run ESXi, the USB key works fine, as long as logging can be exported to an external datastore (NFS, for example).
When I added the new server to the existing ESXi cluster there was, of course, a big gap in CPU generation and features, as the other server is a Xeon. VMware has a great feature for this called “EVC”. It lets you limit the CPU features a VM uses so that it is compatible with all the different generations of CPUs in your cluster, while still allowing live/hot VM migration (vMotion). The Avoton CPU has features equal to the “Intel Westmere” generation of CPUs, which means that setting the EVC mode to “Westmere” enabled live vMotion for all the VMs in my “Physical” cluster.

[Screenshot: cluster EVC mode setting]

Summary

As I have DRS enabled in my cluster, the system automatically moves load between the different servers. Working in my lab now, I have to say that the only time I can tell which CPU a VM is running on is at boot-up: the Xeon processor has more power and that’s noticeable during boot. Once a system is running, it’s very hard to tell which CPU it’s on, and that is exactly what I wanted to achieve. The Avoton CPU is an amazing choice for a home lab when you are purely feature testing. Again, this is not a performance beast and you should not run tests that are meant to compare with a production system; it is meant for playing around with many different features.

Currently both of my hosts are again running at 80-85% memory utilisation, so they are heavily used and I couldn’t be happier with them.

If you have any questions on my set-up please comment below!

[Photo: the home lab]

The Double Switch

It’s been a very long time since I last wrote anything on my blog. Since that last post, so many things have changed that it’s really time to shed some light on them. A lot of people have been asking me about my recent job changes, so I figured it was time to clarify this a bit.

Every year on New Year’s Eve I set a number of goals for the next year. These are usually business related, and never to quit smoking, since I never smoked in the first place 🙂

This year I set the goal of finding the next challenge in my career. By the end of last year I had moved into a pre-sales role at a VAR where I had already been working for a little over 5 years, so the role actually didn’t change much about my daily job: I was still supporting a lot of post-sales activities and was involved in support cases, while my role was meant to be something really different. As I was also the only technical guy in the team with a Service Provider and Data Center focus, I figured it was time for a change. I decided I wanted to find an exciting new challenge outside of Telindus that would surround me with technical peers driven to find the best solution for their customer. I also wanted to be closer to the “source” in my field, which is why I started looking at roles at vendors.

Cisco

I found a great role inside Cisco’s Global Enterprise Theater. This team consists of highly skilled sales and technical people around the world serving the world’s largest global enterprises. I got accepted into a team serving only 1 (very big) client. My colleagues were great and I traveled a lot throughout Europe talking about data center solutions, but I never had that “awesome” feeling; for some reason the job never “clicked”.

Juniper

On my second day at Cisco I got a call from someone I know inside Juniper. He happily told me that he was the SE manager for a newly formed team inside Juniper’s EMEA North organization. He felt I was a perfect fit for the role, which covered hosting/cloud providers, local ISPs and large enterprise data center customers. Of course, I had just started at Cisco, so I thanked him for his interest. Months later he contacted me again: they had not been able to find a suitable candidate for the role. After giving it a lot of thought, I agreed to meet with the team. As soon as I walked into the Juniper office I somehow felt at home. I met everyone on the team and had interviews for 6 hours straight. My manager told me I would hear from him about the outcome of the day, but while I was driving home he already called to tell me they were going to send me an offer as soon as possible. That was an amazing feeling, of course. I immediately felt connected with the team and was so excited!

First few weeks

I’m writing this blog after being at Juniper for a few weeks now, and I have to say it’s one of the best experiences ever. It’s exactly what I thought it would be! I’m dealing with great projects at major customers, I’m building demo setups on just about every new piece of technology Juniper has, both software and products, and I’m part of a super flexible organization that requires some self-learning but also gives you a lot of freedom in choosing the work you pursue, as long as your customers benefit. I’m currently typing this on a flight to San Francisco for my new hire training in Sunnyvale, and I’m excited to meet many of the smart people I’m already in touch with about solutions for my customers.

Quick

Moving away from Cisco after only a few months was a really hard decision, but it has turned out to be one of the best decisions of my career. The reason I still decided to do it was that I followed my heart. “It somehow already knows what you want to become, have the courage to follow it” is one of Steve Jobs’s famous quotes, and that is exactly what I did in this case.

Blog

Apart from a brand new design and new hosting for the blog, be prepared for some technical blogging again, finally! It’s been ages since I last did that, but in my current role I’m able to work on solutions that translate very well into blogs. I’ll be blogging about exciting data center evolutions, features and technologies that I’m working on. That will of course include a ton of SDN, NFV and DC Fabric goodness!

Stay tuned!

