OpenStack Heat Templates and OPNFV

Clearwater has been designed from the ground up for massively scalable deployment in the Cloud.  If you have a large-scale deployment, you don’t want to have to manage each instance manually – instead, you want an orchestrator.

Clearwater has already been integrated with many orchestrators.

OpenStack includes an orchestrator of its own, called Heat.  Over the past month or so, we’ve built Heat templates to automatically create a network, a DNS server and a full Clearwater deployment, as shown using OpenStack’s network view below.

OpenStack Network

For more information on how to deploy Clearwater using Heat templates yourself, see our clearwater-heat repository.
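The templates compose the deployment from nested pieces (the network, the DNS server, and the Clearwater nodes themselves). As an illustrative sketch only (the file names and attribute names here are assumptions, not the repository's exact contents), a parent HOT template can pull in a nested template simply by using its path as the resource type:

```yaml
# Illustrative parent-template fragment; file and attribute names are hypothetical.
heat_template_version: 2013-05-23

description: Compose a deployment from nested templates.

resources:
  # Heat treats a template path (or URL) as a "template resource" type,
  # instantiating ./network.yaml as a nested stack.
  network:
    type: ./network.yaml

  dns:
    type: ./dns.yaml
    properties:
      # Wire an output of the network stack into the DNS stack.
      private_net_id: { get_attr: [ network, private_net_id ] }
```

Outputs of a nested stack are exposed as attributes on the parent's resource, which is how the pieces are chained together.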

One application of the Clearwater Heat templates is as a test application for OPNFV (the Open Platform for NFV).  OPNFV integrates a number of open-source projects, including OpenStack, and needs Virtualized Network Functions (VNFs) to confirm that the platform as a whole works.

Clearwater is present as the vIMS VNF, demonstrating that a full virtualized IMS core can be deployed and managed by OPNFV.  Alongside the vIMS VNF, there is also a live verification VNF (built on Clearwater’s existing live verification test scripts), which generates SNMP alarms according to whether or not the live verification passes – this enables regression testing.

OPNFV is an exciting project, and one we’re glad to be involved in!


Matt Williams is Lead Architect on Project Clearwater. Prior to Clearwater, he worked on Metaswitch’s Call Feature Server and Universal Media Gateway products. When not developing software, he enjoys running, snowboarding and windsurfing.

  1. Beny Nurmanda Reply
    I'm confused about the private management network and private signalling network in clearwater.yaml for the Heat templates. Should I differentiate the private management network IP from the private signalling network IP?
    • Matt Williams Reply
      Sorry for the delay - this message got caught by the spam filter and I only just spotted it! :( The HEAT templates require the management and signaling networks to be distinct, and I believe OpenStack requires that each distinct network has a separate IP address range. (Clearwater itself also supports being deployed with all traffic on the same network, or with separate networks with overlapping IP address ranges.) I hope that helps!
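      To illustrate the "distinct networks, separate ranges" point, here is a minimal sketch (resource names and CIDRs are made up for the example, not taken from clearwater.yaml) of two private Neutron networks with non-overlapping subnets:

```yaml
# Sketch only: two distinct private networks with separate IP ranges.
heat_template_version: 2013-05-23

resources:
  mgmt_net:
    type: OS::Neutron::Net
    properties:
      name: private-mgmt

  mgmt_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: mgmt_net }
      cidr: 192.168.0.0/24   # management range

  sig_net:
    type: OS::Neutron::Net
    properties:
      name: private-sig

  sig_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: sig_net }
      cidr: 192.168.1.0/24   # non-overlapping signaling range
```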
  2. Jaafar Reply
    Hello, thank you for these templates, but I have an issue with them. After launching the scripts I get the following error: ERROR: Could not fetch remote template './network.yaml': Invalid URL scheme. If you can help it would be great! Thank you
    • Matt Williams Reply
      Which version of OpenStack are you using? I think earlier versions (Havana, maybe Icehouse) might not support paths like this. What happens if you try changing to just "network.yaml" (i.e. remove the "./")?
      • Jaafar Reply
        Thank you for your answer - it's working now! I put the files on a web server and changed the pathname. Now I am having this issue: ERROR: mapping values are not allowed in this context in "", line 7, column 19. If you have a suggestion it would be great! Thank you
        • Matt Williams Reply
          Which version of OpenStack are you running? ...and which file is it complaining about? (Unfortunately, my recollection is that OpenStack doesn't give very good debugging information on this.) BTW, it might be better to move this discussion onto the Project Clearwater mailing list (http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org).
          • Jaafar
            I have Mirantis OpenStack and I am using heat client v0.2.8. The only info I have is that the error is in "" (I don't know what that means). I will move the discussion to the Clearwater mailing list.
  3. Jaafar Reply
    sorry it is in ... it didn't appear in the previous text
  4. Jaafar Reply
    in "BYTE STRING"
  5. Konstantin Reply
    Hi, folks! It's now easy to deploy Clearwater in OpenStack via Murano. You can find the Clearwater Murano package in the OpenStack Application Catalog: http://apps.openstack.org/#tab=murano-apps&asset=Clearwater%20vIMS It's an easier and more flexible way than installation from Heat templates - you can provide all settings via the Horizon dashboard and manage the cluster configuration after deployment (scale-out or scale-in actions for Sprout/Bono/Homestead/Ralf).
    • Sergio Reply
      Hi Konstantin, thanks for pointing out the Murano packages, I think they're very useful. I have a problem trying to install the Clearwater vIMS Murano package on my OpenStack (Mitaka): trying to import the package (via repo, URL or zip file), Murano raises an error: Error: Package creation failed. Reason: "There is no item named 'manifest.yaml' in the archive". Could you please help me? Thanks in advance, Sergio
  6. Vinoy Mohan Reply
    Hi Matt, I have a question about deploying Clearwater using Red Hat CloudForms rather than the Heat orchestrator. Can you please let me know whether this is possible, and how we can do it using the CloudForms product?
    • Matt Williams Reply
      Hi Vinoy, I don't believe we've tried deploying Clearwater using Red Hat CloudForms, but I can't see any reason why it wouldn't be possible. I see that Red Hat CloudForms advertises provisioning and life-cycle management - do you know what technology it uses for these?
  7. Nigno Reply
    I'm trying to run clearwater-heat on OpenStack and I have a few questions about the parameters: - what's the difference between public_mgmt_net_id and public_sig_net_id? - Is it required to have two public networks? Regards
    • Matt Williams Reply

      public_mgmt_net_id is the identity of the network that Clearwater should be attached to for management traffic (e.g. SSH, SNMP statistics and alarms, HTTP for provisioning) and public_sig_net_id is the identity of the network that Clearwater should be attached to for signaling traffic (e.g. SIP, Diameter). Note that public_mgmt_net_id and public_sig_net_id can be identical if that's useful.

      When we say "public" networks, this is as opposed to the "private" networks that Clearwater nodes use internally to communicate with other Clearwater nodes. The "public" networks need not be exposed to the public Internet, but you will need to be able to route traffic from your PC to the "public" management network and from your phones to the "public" signaling network.

      I hope that helps. BTW, you might find the Clearwater mailing list an easier place to ask questions like this.
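      Concretely, these parameters are supplied when the stack is created, typically via the heat client's environment-file option. A minimal environment-file sketch (the values are placeholders, and any parameters beyond the two network IDs discussed above are omitted):

```yaml
# Environment-file sketch: the UUIDs below are placeholders, not real values.
parameters:
  # Network carrying management traffic (SSH, SNMP, provisioning HTTP).
  public_mgmt_net_id: 11111111-2222-3333-4444-555555555555
  # Network carrying signaling traffic (SIP, Diameter).
  # This may be the same network ID as public_mgmt_net_id.
  public_sig_net_id: 66666666-7777-8888-9999-000000000000
```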

  8. vince Reply
    Hi, Very interesting material. Thanks! It would be very useful to have some performance measurement you achieved with this deployment for a double-check of the one on my machine. Would you have any link or data to share from that perspective? I couldn't find any data sheet about it on the web site. Thanks! Vince (Intel corp.)
    • Andrew Edmonds Reply
      Hi Vince, we're glad you enjoyed our blog post. Any performance numbers are going to be highly dependent on the specification of the hardware that Clearwater is deployed on. On our physical hardware we've found that each Sprout node can handle 160k BHCA of VoLTE traffic and each Homestead node can handle 325k BHCA of VoLTE traffic; this causes Sprout/Homestead to use around 60% of their total CPU. However, your numbers could be anywhere from around double this to half this depending on your hardware.
  9. Pingback: Interop challenge: global clouds rise again - OpenStack Superuser

  10. Pingback: More Reference NFV Architecture Based on TOSCA + NetConf YANG | Cloudify

  11. Pingback: No Sleep Till Vancouver - Pure-Play NFV, TOSCA & OpenStack Cloud Orchestration | Cloudify

