Well, that generated some comments. Interesting. People talk a lot about testing, but nothing gets ’em fired up like test results. I really appreciate the feedback, and I’d like to respond to it here, over a couple of posts. I don’t mind Blogger’s Compose feature, but as far as I can tell it has a pretty clumsy method for editing comments. Like, none that I can see.
Sai Venkatakrishnan says:
I am happy that they are using continuous integration and deployment as well as automated test suites. It is really healthy to integrate and deploy as soon as possible. But I will never try to use this as an alternative to manual testing, i.e. Exploratory Testing.
Underestimating the value of human thinking and relying completely on automation is a mistake a lot of people make. Automation can give you fast feedback, but it can do only what you ask it to do. People think, adapt, and act, and this is really important for testing. I am not sure when we will realize that and add it to our mandatory routine of application development.
I think we have a chance of that when we understand that automation is not the goal; it’s a medium by which we achieve a goal. It has an effect: it can assist, extend, enhance, and accelerate testing, but it isn’t testing. Something that has an effect is a medium, as McLuhan said. If we truly want to understand the medium of automation, we need to examine the other three effects that every medium has: every medium retrieves ideas from the past, from history, from literature, from mythology; every medium makes some previously prominent medium obsolete; and every medium, when taken beyond its limits (or “overheated”, as McLuhan said), reverses into the opposite of its extending/enhancing/accelerating effect. Cars extend our presence by getting us from place to place quickly; too many cars and we can’t get from here to there.
So automation, extending and intensifying our insight into some aspect of the product, helps for a while. And when automation overheats and reverses, we become blind, overwhelmed by the volume of what we have to analyze and maintain.
I don’t have the right answer to the question of whether automation is overheated in a particular context. To me, it seems like responsible and competent technical work for the people involved to keep asking, and if the project community reaches consensus on the answer, then that’s the right answer for them. As a tester, part of my job is to draw attention to things that they might not have noticed; to be the headlights of the project. If what they see in the headlights is okay with them, that’s their perfectly legitimate choice.
Shrini Kulkarni asks
How does [my deciding that I was looking for bugs too early] compare with “conventional” wisdom that finding defects early in the life cycle is cheaper, hence testing should be introduced early in the cycle? Are you saying that this conventional wisdom (or common sense) has changed its form?
I don’t think so. Finding bugs too early has a lot in common with creating test scripts too early; it’s dashing in and doing something before exploring the problem space properly. I made the freestyle explorer’s version of the scripted test designer’s mistake. Both mistakes incur opportunity cost; both are distractions from figuring out what’s important. Goal displacement, as you’d call it. I feel I should have got farther into the product and had a look around.
The advantage of the exploratory approach, I feel, is that I can recover quickly from this problem when I’m in control of my process. If someone or something else is controlling me, we’re dealing with a larger, more complex management system—and larger, more complex systems take more time to respond.
Kay Johansen remarks
We talked about continuous deployment today after Salt Lake Agile Roundtable, specifically about Flickr.com. We concluded that it might work for “discretionary” software but had doubts about “critical” or business software. So IMVU may be another example of where continuous deployment “works” because the bugs are not “important” to the users.
I’m delighted to hear that other people are questioning the issue, and I’m in very strong agreement with the last sentence. I want to emphasize that my test report was not an assessment of what’s right; it was an attempt to tell a story of what I observed, and then some musings on the larger issue of what might be okay and what might not be. If the story of continuous deployment and the story of what I found are consistent with what the IMVU folks think is okay, that’s their call and they’re right to make it that way. In fact, as …
Elisabeth says
Sims 2 (I’m a recovering addict) has similar rendering weirdnesses with objects intersecting. I can attest that they didn’t interfere with game play and didn’t make the game any less fun. Actually, it made the game *more* fun when the rendering issues produced particularly amusing and anatomically impossible intersections.
Quite right. My stepson and I had lots of fun in one of the PlayStation hockey games, running the instant playback at very slow speed and seeing the glass shatter long before anyone banged into it. (I was far more interested in that than in the hockey part of the game.) Elisabeth raises a number of other important points. She goes on…
So I decided to try to find out if users enjoy IMVU. I discovered that searching on “IMVU love” didn’t bring back the – ahem – information I was looking for. So I tried “IMVU fansite”. Lots of people love this thing. People have even made fan videos.
I went looking for complaints about IMVU from users. I came up with one where the poster is complaining not about software bugs but rather that the whole thing is a waste of time, that he didn’t get as many credits as he expected, and that people are disrespectful. In other words, none of his issues had anything to do with software quality.
That could be true. On the other hand, systems of all kinds tend to have a certain kind of broad consistency. A more sophisticated look and feel might be related to a more sophisticated community. And maybe not. And either way, IMVU may be happy with the community it has.
I went for rough-and-ready metric-based data:
Results 1 – 10 of about 354 for “imvu sucks”. (0.09 seconds)
Results 1 – 10 of about 782 for “imvu rocks”. (0.11 seconds)
Clearly more detailed research is called for. 🙂 But more to come later, on the subject of “survivorship bias”.