About

Who is Captain KVM

Captain KVM is the alter ego of me, Jon Benedict. I started my I.T. career building database queries on an IBM AS/400, all in support of an automated dialer system in a call center. From there I moved on to building custom RISC-based servers at Avnet; performing hardware recovery, system integration, and tool building at AOL; and working as a Professional Services Consultant at Red Hat. As a consultant, I provided short- and long-term services to many Wall Street customers, federal agencies, and telco providers.

In 2009, I took my Red Hat and virtualization expertise to NetApp as a Technical Marketing Engineer, where I designed end-to-end solutions around KVM, RHEV, and RHEL-OSP. While at NetApp I maintained the technical side of the Red Hat/NetApp Alliance, and created this blog/online persona. I was also involved in the creation and design of the FlexPod data center solution, a highly popular converged infrastructure based around NetApp storage, Cisco networking, and Cisco UCS servers.

In 2014, I returned to Red Hat as a Principal Solutions Architect, a pre-sales engineering role covering the cloud space in the Southeast. I worked on pre-sales consulting and design for OpenStack and cloud management, steering as much of that as possible toward Red Hat technologies. In the spring of 2016, I took over Technical Marketing for Red Hat Enterprise Virtualization (RHEV) in the Platform Business Unit of Red Hat.

I love writing and have been lucky enough to be published in numerous online industry magazines, and I have written several white papers on KVM, Red Hat Enterprise Virtualization, and NetApp storage. I also enjoy public speaking and have been selected to present at numerous trade shows, including Red Hat Summit, KVM Forum, oVirt Workshop, and OpenStack Summit.

@CaptainKVM

17 thoughts on “About”

  1. Hi Capt!
    I’m hoping you can assist me with a question involving RHEV-H and the NetApp HUK. I’ve found two documents that you wrote that do not reference the need for the kit, tr-3914 and tr-3940. Both mention using DM-MPIO to take care of the multipathing. The concern is that nothing is mentioned about adjusting the timeouts (config_hba) on the HBAs. Do you know if the Host Utilities Kit needs to be, or even can be, installed on RHEV-H? If not, do the timeouts need to be adjusted? Any help is appreciated.

    Agreed, Capt Caveman is way cool!

    1. Hi Mike,

      Thanks for reading and taking the time to leave a comment/question. And based on your comments, it looks like you read my TRs for something other than a sleeping aid. 😉

      The short answer is that you can’t install any software on RHEV-H, but that’s part of the beauty – it requires minimal configuration & maintenance. If you find that you have a use case that requires the use of the NetApp HUK, or any kind of agent (backup, monitoring, etc), then you might want to look at using the “thick” hypervisor (RHEL 6). You can still manage it from RHEV-M as if it was RHEV-H. It’s not as thin as RHEV-H, but you can still strip it down to minimal packages. Check out my earlier post on mixing thick and thin hypervisors.

      I’ve not had any issues with the timeouts set by RHEV-H, but you may have a different experience based on your environment and/or application load.
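
      If you want to verify what a host is actually using, a quick check from the hypervisor shell might look like the sketch below (the device name `sda` is a placeholder for one of your LUN paths):

      ```
      # Current SCSI command timeout, in seconds, for a given block device
      cat /sys/block/sda/device/timeout

      # Multipath topology and per-path state as seen by DM-Multipath
      multipath -ll
      ```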

      c.k./j.b.

  2. Hi Jon, congratulations on your blog; I really appreciate the topics you cover.

    I’ve been trying to get oVirt working for several days, but I can’t succeed in adding NFS storage; everything else seems to work fine (unfortunately I have neither iSCSI nor FC storage). I don’t want to take over this space with my troubles, so may I send you an email? Of course, only if it doesn’t bother you.

    1. Hi Provino,

      What kind of errors are you getting when you try and mount the NFS? Here are some things to try:
      1. Mount the NFS export manually (not using oVirt). If you can do that, then make sure that the owner:group for everything in the NFS export is 36:36. For example, `mount nfs_server:/path/to/export /images`, then `chown -R 36:36 /images` (change the export path and mount point to your own values; see the sketch after this list). After you change the ownership, unmount the NFS export and try again with oVirt.
      2. If manually mounting the NFS export hangs, check your iptables configuration. Be sure that the NFS client daemons are specified (LOCKD_TCPPORT=32803, STATD_PORT=662), then open those ports in your firewall. In RHEL and Fedora, you can specify those ports in /etc/sysconfig/nfs, then reboot the host.
      3. If the NFS export fails to mount because of permissions, check your permissions on the NFS server itself.
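
      As a rough sketch of steps 1 and 2 (the server name, export path, and mount point are placeholders; substitute your own):

      ```
      # Step 1: mount the export manually and fix ownership for oVirt (36:36 = vdsm:kvm)
      mount nfs_server:/path/to/export /images
      chown -R 36:36 /images
      umount /images

      # Step 2: pin the NFS client daemon ports (RHEL/Fedora: /etc/sysconfig/nfs),
      # then open them in iptables and reboot the host
      grep -E 'LOCKD_TCPPORT|STATD_PORT' /etc/sysconfig/nfs
      # Expected output:
      #   LOCKD_TCPPORT=32803
      #   STATD_PORT=662
      ```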

      Hope this helps!

      CaptainKVM

  3. Hello Captain

    I need help solving a problem with KVM. I installed KVM on CentOS 6.2 and installed a virtual guest with Windows 2003 as the OS. I have a network performance problem in the Windows guest: it takes a long time to copy files over the network and it sometimes hangs. Performance is very slow, but there is no problem with the Linux guest OS; this problem occurs only in the Windows guest.
    For information:
    Machine: R720, 12 cores, 64GB RAM, and 4TB SCSI HDD with RAID 5.
    Running CentOS 6.2 x86_64, kernel 2.6.32-220.13.1.el6.x86_64.
    Using the VirtIO driver and network bridging.
    Any ideas about this problem?

    thank you
    Aswin

    1. Hi Aswin,

      Thanks for dropping by. I hear your particular complaint quite a lot – no issues with Linux guests, but slow Windows guests. The first thing to check is that the NIC drivers on the guest are up to date – in your case either virtio or e1000. If that doesn’t change anything, check out this link: http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry . It offers guidance around getting better throughput via editing the Windows registry. Essentially in Windows, larger packets will cause buffering and a large number of context switches – both will put a drag on your performance.
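
      As a quick sanity check from the host (the guest name `win2003` below is a placeholder), you can confirm which NIC model the guest was actually given:

      ```
      # Look for <model type='virtio'/> (or 'e1000') under each <interface> element
      virsh dumpxml win2003 | grep -A5 '<interface'
      ```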

      Hope this helps,

      Captain KVM

      1. Hi Capt

        Thank you for your fast reply. I tried it, but still cannot solve the problem. Thank you anyway; I will try Xen virtualization for the Windows guest.

        Best Rgds
        Aswin

  4. Hi Capt,

    Last year, I moved to virtualization technology using RHEV/KVM; all of our physical servers were moved to virtual machines, and Microsoft Exchange 2007 is also running on KVM. This year, however, we upgraded our NetApp storage to 10TB, and one of the applications supported by NetApp, SnapManager for Exchange, cannot be installed on the Microsoft Exchange VM.

    1. Khaizamis,

      Thanks for taking the time to read and comment. I am not an expert on SnapManager for Exchange (SME), but I believe what you are referring to was the result of an issue around a “.dll” file. I believe SME and Exchange both used the same dll file which caused a conflict. I heard that this has been resolved, but again, I’m not an expert on SME. Check out this document for best practices: http://www.netapp.com/templates/mediaView?m=tr-4033.pdf&cc=us&wid=141047043&mid=66605243 .

      Hope that helps,

      Captain KVM

  5. Hey, Captain! Would love to talk with you about RHEV and NetApp, as well as get some more info on the Alinean ROI calculator you used in one of your comparison articles. E-mail me and let’s discuss offline. Thanks!

  6. Hi Capt!

    Recently, we tested some specific High Energy Physics benchmarks running on S.L.6 (Scientific Linux 6), and we observed that the performance on AMD machines was really bad, about a 30-40% loss. We investigated and concluded that the problem was with some CPU flags. These CPU flags are important for running our application because it is optimized to use them. So we tuned our KVM command line with the names of the flags as parameters, and with the “-cpu host” parameter too, but we observed that the flags we needed weren’t included when we started the S.L.6 instance. We tried to solve it, but concluded that the specific flags weren’t included in the code of the latest versions of KVM and libvirt (capabilities). Do you know something about this problem, or are you otherwise aware of it?

    Thanks in advance,
    Víctor Fdez.

    1. Hi Victor,

      This isn’t something that I was aware of. Your best bet is to open one or more bugs against SL and KVM. Was the performance loss in comparison with Intel CPUs, or compared to bare-metal SL?
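
      In the meantime, a couple of quick comparisons might help narrow it down; here is a sketch (run the same grep inside the guest and diff the results against the host):

      ```
      # On the host: list the physical CPU flags
      grep -m1 '^flags' /proc/cpuinfo

      # What libvirt believes the host CPU supports (see the <cpu> element)
      virsh capabilities

      # Inside the guest: run the same grep and compare against the host.
      # Passing the host CPU straight through usually exposes the most flags;
      # in the guest XML that is:  <cpu mode='host-passthrough'/>
      ```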

      Captain KVM

  7. Hi, I couldn’t find an email address for you, so I’m letting you know here that someone has defaced captainkvm.com, adding a link between the Captain KVM image and the about/home tabs. The text is some slang for male genitalia, with an external link.
