As the IBM z/VM product approaches its 40th Anniversary on August 2, 2012, the author takes time to reflect on the product and his experiences.
Friday, June 29, 2012
At various points during the VM Workshop today, I heard people talk about their mentors, or about people who impacted them early in their careers. This brought back various memories for me, one of which was of Paul Van Leer. Paul had been doing performance analysis ever since dirt was invented, or at least it seemed that way. In the very early 90s, the consolidation of VM work from Kingston to Endicott began. At the time, Paul was the IBM rep to the SHARE VM Performance Project, and my manager, Eric Strom, told me I'd be the new rep from Endicott for the project. It was a turning point in my career and in my appreciation for the product and community.
Paul cared a great deal about the product and the customers, and he was going to do all he could to make sure I'd be a suitable IBM rep. At times he was like a drill sergeant; at other times, like a father teaching a son to tie his shoes. Three particular stories come to mind.
Management thought it would be good for me to go to Kingston and spend time with Paul. Just going into his office was a bit intimidating. I'm not sure if it was the awards and such on the walls, the cloud of smoke in the room (you could smoke in your office in those days), or the combination. I remember he started out by telling me that the disciplines of performance fell into five areas: measurement, monitoring, analysis, design, and modeling. He paused and asked me if I understood. I thought a little before answering and said, "Yes, I think I've been exposed to all of that, but probably not to the depth you have." I thought that was a good answer. Then suddenly I was hit with the mental equivalent of a Vulcan nerve pinch when he smiled and said, "Well, let me explain how you really don't know anything." Paul then spent the rest of the day showing me what it really was to have depth of practical knowledge. We reviewed various case studies and pages and pages of performance information. I left his office humbled and changed for the better.
The next story came a few months later, when I did one of my first presentations at an IBM Specialist Update. I think this was in Atlanta, because I remember sweating a lot, which didn't help the nerves (remember, this was back in the days of suits). I made it through my presentation okay. Later in the day I was walking to a session and passed a small vending area. There, sitting at one of the tables, was Paul. He motioned me over, and then proceeded to give me several pointers and corrections for my presentation. Just when I started to think that maybe I wasn't cut out to be a speaker, Paul smiled and told me that I'd do fine; I just needed a little more practice.
The third story happened at a SHARE conference, which was to be the cross-over conference for us: Paul's last as IBM rep to the project and my first. Somehow, there was a mix-up and the registration folks wouldn't give me an IBM ribbon for my badge. At one of the planning meetings, someone noticed that I was missing a ribbon. Paul was sitting next to me. He took out a pocket knife and unfolded it. My eyes got bigger as I tried to think of what I might have done wrong. Holding his own ribbon tightly, he sliced it down the middle and handed me half of it. And then, I knew that everything was going to be okay.
Paul would tell me, "Remember, your job is to represent IBM to the Customer, and the Customer to IBM." That has stuck with me. Rarely does a conference go by without those words echoing in my mind.
Thursday, June 28, 2012
Seek First to Understand
Habit number 5 from Stephen Covey's The Seven Habits of Highly Effective People is "Seek first to understand, then to be understood." It just struck me how this relates to a change in my learning style, and to a core value of z/VM development.
Coming out of college, I was still somewhat stuck in that mode of learning:
- Memorize facts or procedures, so you can...
- Regurgitate those facts, so you can...
- Pass and move on to the next class
But the point here is that my mentors changed my learning process. They forced me to know why we did things when I wanted to just focus on the procedure. They took me from being able to type in a list of commands to knowing what those commands were and what they did. In some ways, it was as if I was learning a new language. Just because I can count to ten and name the days of the week in Spanish doesn't mean I can speak Spanish. So it was that my mentors forced me past memorization of VM commands and into an understanding of virtualization and the underlying architecture.
I see this with my current peers. For example, when they train new people on z/VM memory management, they don't give them a list of subroutines to memorize; they tell them to go read and understand Chapter 3 of the z/Architecture Principles of Operation. Probably just what their mentors told them. I smile thinking about the generations of people involved in z/VM and the approach of understanding the system first in order to develop on it to the fullest. There's a sense of craftsmanship in all of this.
This craftsmanship has another dimension: understanding how things are used. The better moments in VM history are those where VM Development collaborated with, and accepted direction from, customers and vendors to influence the design and implementation of new functions. This all takes time, but I would argue that it is worth it.
Combining the two dimensions is magical. What's better than a developer who first knows the potential of hardware and architecture and combines it with knowledge of the customer's needs and business to produce something that brings true value? Nothing is better. Well, nothing except 40 years of developers doing that.
Wednesday, June 27, 2012
My First VM Workshop
Anyone who has been in the VM Community for a significant portion of time has probably heard of the Original VM Workshops. These summer events, held on university campuses across the United States and Canada, were more like reunions than conferences, and they always brought out the child in each of us. I was glad to see the rebirth of the Workshop last year at The Ohio State University in Columbus.
IBM attendance at the original VM Workshops wasn't as strong as that of customers and vendors. Remember, those were the days when customer interaction involved dark suits and white shirts. However, shortly after Casual Friday was invented, I was able to go to my first VM Workshop, at Notre Dame.
The flight to South Bend wasn't all that long. Still, it felt like one of those moments after a very long flight where you land in a strange land, jet lagged, feeling like you've been immersed in a very different culture. People were wearing shorts and sandals and ugly Hawaiian shirts. I don't mean just Hawaiian shirts, but the bold and the ugly kind. The ones that make you feel like Mom is about to tell you, "It's not polite to stare."
The sight that really floored me was that of the VM product owner at the time, Tim Metcalf. There were often 'fun' things going on at the Workshop. At Notre Dame, someone had brought a unicycle with training wheels, and someone else had convinced Tim that he should try to ride it. For those that don't know Tim, he was a football lineman in his younger days. I'm not sure anything else I've seen in my career told me it was okay to relax with our customers more than seeing this man of a certain stature and authority (I think he was my third or fourth line manager at the time) attempting to ride a unicycle. I appreciated many things about Tim's management of the product in the 1990s, but my memory always returns to this.
Photo: Tim Metcalf, mastering a unicycle.
Photo: I look good here, but the ride didn't last long.
Unfortunately, I wasn't able to stay for the entire Workshop as I needed to be in Europe for another event, which was exciting, but didn't have unicycles.
Another tradition of the Workshops was the T-shirt for the event. The Notre Dame one continues to be one I treasure for its unique blend of creativity and theme. You might not be able to see it well in the pic, but those are the four teddy bears (aka the four horsemen).
Today I left at 6am, on my way to the University of Kentucky and the 2012 VM Workshop. Will I see you there?
Tuesday, June 26, 2012
Recursive Virtualization
What do you do if you have a lot of time on your hands? Well, if you're a VM tester, you try to see how many levels of VM you can run. That is, running VM in a virtual machine on a VM that is running on a virtual machine, and so on. The record I've heard most often quoted is nine, by an IBM system tester in the 1990s, whose face I can picture but whose name, for the life of me, I can't recall. Perhaps someone out there will provide it. He did this with a VM/ESA system. Nine isn't necessarily a limit from a technology perspective, but more from a sanity perspective. As I recall, the ninth one wasn't setting any performance records. Also, it takes some patience to keep consoles and other things straight as to which goes with which system. I was amazed when I heard of this, because there are various times I have struggled just to sort out second- and third-level systems. How many other people out there have formatted a 191 disk on the wrong level? Or issued SHUTDOWN at the wrong level? Ok, perhaps it's just me.
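For what it's worth, here's the sanity check I try to remember now. This is a minimal sketch, assuming default terminal settings (the # line-end character can be redefined), and I'll leave out the responses since they vary by release. The idea: from the console of a second-level system, a plain CP command is answered by the second-level CP, while the #CP escape is intercepted by the first-level CP underneath you.
   cp query cplevel        (answered by the system you are logged on to)
   #cp query cplevel       (answered by the CP hosting your virtual machine)
A QUERY CPLEVEL, or QUERY USERID, at each level before typing SHUTDOWN tells you exactly who is about to do the shutting down.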
What's the point of running that many levels? From a business perspective, there really isn't one. It's a curious academic experiment. It may have turned up some unique bugs; but beyond that, it's just bragging rights. (Though, since my poor brain can't even recall the chap's name, his name isn't even preserved in this blog.)
At a smaller number of levels, there is business value. Whether it be for testing, training, or hosting purposes, second-level systems can be very helpful and valuable. One of the things I often encourage new z/VM people to do is to build a second-level system. It gives you an opportunity to try various commands and features in a worry-free environment. And it's a very realistic model of a "real" z/VM system, with just a small number of LPAR or CEC features that are not virtualized up through a second-level system.
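If you want to try it, the starting point is just another entry in the user directory. Here's a rough sketch of what a guest for hosting a second-level system can look like; the user ID, password, device numbers, and volume label below are all hypothetical, and a real definition needs minidisks or dedicated volumes from an actual z/VM installation:
   USER VMTEST2 VMTEST2 6G 8G G
   *  Two virtual CPUs and enough storage to IPL z/VM itself
      MACHINE ESA
      CPU 00 BASE
      CPU 01
      CONSOLE 0009 3215 T
      SPOOL 000C 2540 READER *
      SPOOL 000D 2540 PUNCH A
      SPOOL 000E 1403 A
   *  Hypothetical minidisk holding the second-level system residence volume
      MDISK 0123 3390 0001 3338 VMPK01 MR
Log on, IPL the residence device (IPL 0123 in this sketch), and you can watch a z/VM system come up inside your virtual machine. At that point, of course, the wrong-level hazards from the recursion story above apply to you too.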
I believe the ability to do recursive virtualization is another aspect of the purity of the z/VM virtualization model. It's not a be-all, end-all measure; but it is another view worth considering.
Monday, June 25, 2012
Architecture Faithfulness and Fidelity
Many of you have heard me comment on this already, but I believe it's worth repeating, even more so as I look back over the decades. My colleague Brian Wade used the phrase: z/VM's role is to "faithfully replicate the architecture." John Thomas of the IBM Competitive Project Office referred to this as fidelity. It has been a core design principle of z/VM that it obeys the z/Architecture Principles of Operation as if it were a real machine. Many of the test suites that are run against the real hardware, or logical partitions, are also run in a z/VM virtual machine; and they need to run successfully for a new release to pass.
You might ask why architecture faithfulness or fidelity is important. There are several key reasons. First, it's been helpful over the years to have this design principle or goal; otherwise, you would see more drifting in purpose and reliability. It also means that since System z architecture evolves with a high degree of forward compatibility, z/VM also has a high degree of compatibility. While some would disagree, I claim that's an important value. How many of us have programs we wrote in the 1970s or 1980s that still run successfully today on z/VM? We'll talk more about compatibility in another post.
While we have extensions to the architecture in z/VM, significant effort goes into not compromising the architecture for the sake of those extensions.
There is also value in accurately representing the architecture from a trust and confidence perspective. Various software groups and vendors are comfortable that the virtualization layer z/VM adds will not influence the behavior of their software in unforeseen ways. They still recognize the value of testing, but they don't feel the warranty risk or problem determination expense will be out of proportion to the value gained. I believe this is one of the reasons various software is certified to run under z/VM virtualization but not under all the other virtualization solutions.
Have you ever met someone who didn't seem to stand for anything, or have any core principles? I have, and I've always had trouble trusting people like that. I like to think of z/VM as being trustworthy. Faithfulness to the architecture is one of its core principles; it has served z/VM over the decades, and hopefully it will continue to be a waypoint through the next decade's journey.
Friday, June 22, 2012
What's a VM?
Yesterday I discussed the VM/ESA Handbooks and how I had a chapter in one of them. I confess it was a very proud moment when I got a copy of the book. I remember sharing the news with friends from church, not that they were really into VM/ESA, or even computers in general. One of my friends, a young lady, picked up the book in support of me and declared that she would read my chapter, which started with:
"VM is an ideal platform for understanding application performance."
Looking up from the book, she asked, "Ok, so what's a VM?" Well, now isn't that question at the heart of things in these reflections? I admit that at the time, I really struggled to answer her question. I was handicapped by not having a whiteboard to use, and I wasn't quick enough to think to say, "You will need to read Chapter 1 for that." I believe her background was something like early childhood development, not electrical engineering. (Though early childhood development could come in handy when dealing with unreasonable co-workers or customers.)
We have lots of ways to explain VM these days. There are even whole presentations devoted to it. A virtual machine is an amazing thing, and future posts will look at some of the things I find most exciting about them. But in some ways, the amazement doesn't overwhelm until you learn about how it's used.
The analogy that comes to mind is from back in grade school, when we learned about simple machines: lever, pulley, wheel, inclined plane, wedge, and screw. While interesting as individual items, the amazement grows exponentially when they are combined and enhanced. So with this model, I sometimes take a different approach and describe the complex first. I point out that their bank ATM transactions are validated by software in a virtual machine. The website they are surfing runs in a virtual machine. That purchase they just made is getting routed through virtual machines. The processing of records for their court appearance next month will be done with virtual machines. The paycheck they're cashing to pay the court fee was connected to a virtual machine. In the end, they may not have a better understanding of what VM is; but hopefully it will help them understand the value of VM.
I don't want anyone thinking I'm calling VM simple. And simple should not be confused with unimportant. Take out all of the simple machines inside a Ferrari Enzo and it's not so impressive. Take VM out of enterprise solutions, and they won't be impressive either.
In case you were wondering, my friend never finished the chapter, but she still thought it was cool that I worked on something that had books devoted to it and that I was part of that. I have to agree.
Thursday, June 21, 2012
The Class of 1994
How many of you remember the VM/ESA handbooks from the J. Ranade IBM Series, championed by Gabe Goldberg and Phil Smith? There were The VM/ESA Systems Handbook and The VM/ESA User's and Applications Handbook. I was thrilled to have been asked to contribute a chapter to one of them: CMS Application Performance. The experience was amazing. My first taste of writing was humbling and rewarding. Who would have thought stringing a bunch of words together would be so difficult?
As I look back over the contributors, I see so many wonderful people. Those of you who have copies, dig them out. It's almost as much fun as pulling out one's high school yearbook. The Class of 1994 (Phil or Gabe will correct me if I have the year wrong). Oh, look, there's Mark Cathcart with his chapter: Wide-Open VM. While DCE and POSIX aren't z/VM bread and butter these days, Mark's words portray how much interoperability there has been with the VM products over the years, and how it continues to be important.
Like Mark's chapter, many others are out of date when it comes to the specific technology, but almost all retain elements of the core of z/VM. The keys to writing good server virtual machines remain true today. The importance of delineating accounting and security functions didn't disappear. And I dare say some of the collaboration software programmers of today would benefit from reading the chapter on Exploiting OfficeVision/VM.
So hurry, go find your copy of the handbooks. Grab a fresh cup of coffee and enjoy.
I had to search on-line for the books as well. And yes, I clicked this:
Wednesday, June 20, 2012
Big New World
In 1985, when I started with IBM and VM (VM/SP at the time), I was in awe of the size of the world and the size of IBM. This was a shock for a kid who grew up on a street called Cowpath Road. I believe IBM had around 405,000 employees worldwide in 1985. Being on the same "team" with that many people was something I don't think I comprehended right away. I was still dealing with just the population of IBM Endicott at the time. Depending on how you defined it, IBM Endicott included the Glendale, River Plaza, North Street, and Century Plaza locations. I don't recall the total Endicott population in 1985, but it was easily over 12,000. The main products in that era were printers, processors, programming, and check-processing machines.
What was really special to me was that no matter what question I had, there was someone on the site who could give me the answer. While there was a nice library on campus, I just always found it easier to ask someone. In some ways, this isn't all that different from googling for an answer today. Only instead of clicking, it would start by spinning around in your office chair to ask your office mate, "Hey, do you know who can tell me about the 3330 DASD?" Very seldom did I have to visit more than two or three people to get my answer. As with the Internet, I suppose I needed to be cautious about the degree to which I trusted the answers I was given. However, my gut tells me those answers were more trustworthy than a random Google response today. I don't have data to prove it; it's just conjecture on my part. It seemed people were more careful then to delineate between how they knew something worked and how they thought it should work.
Tuesday, June 19, 2012
We the people...
Several years ago, I was at a SHARE conference listening to my friend and co-worker John Franciscovich explain some useful things about CP (the control program) for z/VM. At one point in the presentation, he used the phrase, "Then we create the control blocks..." He paused, and said, "When I say we, I really mean the CP code. It's not like we're little people running around inside the computer."
It struck me that a lot of z/VM developers talk that way, reflecting a degree of ownership of the design and code. Some of this comes from working on it and owning it for a length of time; almost as in a science fiction novel, the coder becomes the code.
While that may sound weird to others, it gives me a sense of confidence in the product and the people. Quality studies have talked about how quality changes when names are attached to work, when one signs one's name to a piece of work. It's old-school craftsmanship. If you look closely enough at z/VM code, you'll see names. Well, not really names, but programmer IDs. So in a sense, we do sign our code. My few lines of code are identified by "2B".
I don't believe this is just a current trend. If I look back historically at z/VM, I have trouble separating the product from the people, and the people from the product. And I think that is a good thing.