What other KVM & NetApp demos would you like to see?

I’m happy to report that the demo on Offload VM Cloning from KVM to the Back-end Storage, pt2 is by far the most popular post on my little blog since its inception late last year.

What other demos would you like to see? What other integration work would you like to see occur between Red Hat and NetApp? Are there other comparisons that you would like me to highlight?

I’d love to get your feedback!

Captain KVM

 

10 thoughts on “What other KVM & NetApp demos would you like to see?”

  1. Have you managed to get the simulator running under kvm/libvirt? If so, a demo on that would be great.

    Thanks for the blog.

    Trent

    1. Hi Trent,

      I haven’t tried it yet, but that’s a great idea! I may run into issues with the virtual NICs that are presented, as it will likely be looking for VMware-based devices, but I’ll give it a whirl.

      thanks for the suggestion,

      Captain KVM

      1. Hi there,

        A long time has passed since this topic was brought up… did you manage to get any results? This is a pretty hot topic for me as well, considering I’d like to deploy the ntap sim into an OpenStack cloud (built on KVM). If you’ve made any progress at all, I would appreciate the info.

        Marek

        1. Hi Marek,

          If you’re referring to the NTAP simulator as a KVM guest, no, I haven’t been able to try it yet. I’ve been away from my lab for several months now (trade show season). However, this is something I really want to dig into this winter, when I generally have longer stretches that I can use for lab time. A rough, untested sketch of where I’d start is below this thread.

          Captain KVM

          1. Sounds good. Could we stay in touch to sync on progress? We are busy rebuilding our private cloud at the moment, but once it is up and running, porting the ntap sim to KVM will become quite a hot topic for us.

            Marek
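
For anyone who wants to experiment ahead of a proper write-up, here is a rough, untested sketch of how I would first attempt the simulator as a KVM guest. It assumes the simulator ships as VMware-format (VMDK) disks and that devices close to the VMware defaults (IDE disks, e1000 NICs) are the safest first guess; all file names, paths, and sizing below are examples only.

    # 1. Convert the simulator's VMDK disks to qcow2 (file names are examples)
    for d in DataONTAP-sim-*.vmdk; do
        qemu-img convert -O qcow2 "$d" "/var/lib/libvirt/images/${d%.vmdk}.qcow2"
    done

    # 2. Import the converted disks into a new guest, sticking to devices the
    #    appliance is most likely to recognize (IDE disks, e1000 NIC, serial console)
    virt-install --name ontap-sim --ram 2048 --vcpus 2 \
        --import \
        --disk path=/var/lib/libvirt/images/DataONTAP-sim-1.qcow2,bus=ide \
        --disk path=/var/lib/libvirt/images/DataONTAP-sim-2.qcow2,bus=ide \
        --network bridge=br0,model=e1000 \
        --graphics none

If the appliance refuses to boot, swapping the disk and NIC models (SCSI instead of IDE, rtl8139 instead of e1000) via virsh edit would be the next thing to try.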

  2. I would love to see some clustering – namely RAC, MC/SG and/or RHEC.
    Some months ago I had to quickly bring up a 3-node RAC (I actually only managed 2 of the nodes and the storage 🙂 ) for a POC for a customer’s support case:
    the host OS was Debian 6, and the guests – 3 nodes (2 of which actually worked) – ran RHEL 5.5, with Openfiler providing the iSCSI.
    My main issues were bridging the whole thing – I was stupid enough to use /etc/hosts instead of proper DNS – and udev for the iSCSI, which was another fault on my side, since I rebooted the nodes before assigning UIDs to the disks for ASM.
    I’m actually curious whether it is possible to set up/build virt image disks on the host OS to share with the guests – clustering again 🙂 (a rough sketch of one approach follows this comment).
    Thanks again for the nice and useful info.
    Have a nice day and great weekend!
    Stoyan
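
On the shared-disk question above, here is a minimal sketch of one way it can be done with stock libvirt tooling. Guest names and paths are examples only, and for anything beyond a quick lab test a raw, fully allocated image (or better, an LVM or iSCSI LUN) with host caching disabled is the safer choice – the guests still have to run something cluster-aware on top of the shared device.

    # 1. Create a raw disk image on the host to act as the shared device
    qemu-img create -f raw /var/lib/libvirt/images/shared-disk.img 10G

    # 2. Attach the same image to both cluster nodes as a shareable disk
    #    (cache=none so the host page cache doesn't hide writes from the peer)
    virsh attach-disk node1 /var/lib/libvirt/images/shared-disk.img vdb \
        --cache none --mode shareable --persistent
    virsh attach-disk node2 /var/lib/libvirt/images/shared-disk.img vdb \
        --cache none --mode shareable --persistent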

  3. Hello again, Captain.
    I have another question:
    Recently I came across LXC containers, and I know for a fact that they run on RHEL 6+.
    I tried them in the lab, but I needed something fast and went with OpenVZ at the time, because I didn’t set up the /root-s for the new containers correctly and gave up on LXC 🙁 – I usually don’t have much time in the office to play 🙁
    Can you do an article on LXC with either oVirt or virt-manager on RHEL 6.x?
    Also, can you make some remarks on the differences between br0 (the standard Ethernet bridge) and virbr0 (the new “virtual switch”)?
    Thank you in advance – have a great week 🙂
    Best regards,
    Stoyan

    1. Hi Stoyan,

      Good requests, although it may be a while before I get to cover Linux Containers. Virtual Machine Manager (VMM) under RHEL 6 and Fedora supports LXC (a quick command-line sketch follows below), but I don’t believe oVirt or RHEV support it.

      Captain KVM
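
For reference, here is a quick sketch of what libvirt’s LXC driver looks like from the command line on RHEL 6 or Fedora (the same containers then show up in Virtual Machine Manager when it is connected to lxc://). The rootfs path and names below are examples only, and the root filesystem has to be prepared separately.

    # An "application" container: run a single binary in its own namespaces
    virt-install --connect lxc:// --name httpd-container --ram 256 \
        --init /usr/sbin/httpd

    # An "OS" container: point --filesystem at a prepared root filesystem
    virt-install --connect lxc:// --name centos-container --ram 512 \
        --filesystem /var/lib/libvirt/lxc/centos-rootfs/,/ \
        --init /sbin/init

    # Attach to the container's console
    virsh -c lxc:// console centos-container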

      1. Thank you.
        You see, my interest in this is mainly OS-level virtualization, which up until now was best represented by Solaris Zones – deploying and testing there is awesome: fast, easy, and if you break the app/software in the zone (which I, as a support engineer, do really often 🙂 ), big deal – I can easily deploy a new one.
        A friend of mine who writes code for a cloud application told me that LXC has even more advantages – it goes “deeper” than jails/chroot and may be faster. And with the “unified” kernel you can, for example, set the annoying Oracle parameters at the “hypervisor” level and then go ahead and clone, clone, clone containers 🙂
        This is why I actually put the “virbr0” question in the mix – if you can set up a couple of virtual switches with the clones from above, testing, proofs of concept, and even some development work become much easier (a minimal comparison of the two bridges appears after this reply).
        Not to mention storing the whole thing – LVM, iSCSI, etc. – whichever comes in handy, even NFS or CIFS: each instance is a directory tree, so you can go ahead and do whatever you want with it.
        All best,
        Stoyan
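
On the br0 vs. virbr0 question, a short comparison until a fuller article happens: virbr0 is the bridge created by libvirt’s built-in “default” network, where guests get DHCP from dnsmasq on a private subnet and reach the outside world through NAT, so they are not directly reachable from the LAN; br0 is a plain Linux bridge enslaving a physical NIC, so guests sit on the same layer-2 segment as the host and take addresses from the LAN. A minimal sketch of where each is defined (interface names and addressing are examples only):

    # virbr0: libvirt's NAT'd "default" network (dnsmasq + iptables NAT)
    virsh net-dumpxml default      # shows <forward mode='nat'/> and virbr0

    # br0: a classic host bridge on RHEL 6, defined in the network scripts
    cat /etc/sysconfig/network-scripts/ifcfg-br0
    #   DEVICE=br0
    #   TYPE=Bridge
    #   BOOTPROTO=dhcp
    #   ONBOOT=yes
    cat /etc/sysconfig/network-scripts/ifcfg-eth0
    #   DEVICE=eth0
    #   BRIDGE=br0
    #   ONBOOT=yes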

  4. An addition to the LXC topic.
    I managed to test these containers on Debian, and it is a pain in the back – I had to reconfigure the network bridge (I have KVM VMs on the same box), had to make the host work as a semi-router (for the LXC containers to have “outside” network access you have to enable IP routing – the routing piece is sketched after this comment), and in the end I managed to install only Debian “guests” without issues.
    When I tried to add a CentOS/RHEL container (no matter where I took the rootfs from – a ready-to-use OpenVZ one, or building it myself with a pre-built script), I gave up after 6 hours of trying – at this stage I’ll be sticking with OpenVZ (when I can), KVM (just love it 🙂 ), and zones (I love Solaris, but that’s not on topic here).
    I hope you’ll have more luck than me.
    Have great holidays!
    All best 🙂
    Stoyan
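
For completeness, here is a minimal sketch of the “semi-router” piece mentioned above: giving containers on an internal bridge a path to the outside network by enabling forwarding and NAT on the host. The subnet and interface names are examples only.

    # 1. Enable IPv4 forwarding on the host
    sysctl -w net.ipv4.ip_forward=1
    echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf    # make it persistent

    # 2. NAT traffic from the container subnet out of the host's uplink (eth0)
    iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

    # 3. Allow the forwarded traffic in both directions
    iptables -A FORWARD -s 192.168.100.0/24 -o eth0 -j ACCEPT
    iptables -A FORWARD -d 192.168.100.0/24 -i eth0 \
        -m state --state ESTABLISHED,RELATED -j ACCEPT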
