Rick Brenner, Chaco Canyon Consulting
When we talk or listen, send or read email, read or write memos, or leave or listen to voice mail messages, we’re communicating person-to-person. And whenever we communicate person-to-person, we take risks.
We risk being misunderstood, offending others, feeling hurt, and being confused. There are so many ways for things to go wrong that we could never learn to fix every problem after the damage is done – there’s simply too much to know. And when things do go wrong, the personal and organizational costs can be unbearable. Careers can founder; new products can arrive too little and too late; companies can fail.
A more effective approach avoids problems altogether, or at least minimizes their occurrence. If everyone in the group understands how interpersonal communications can fail, they can frame their communications to avoid problems.
Participants learn a model of interpersonal communications that can help them stay out of the ditches. Virginia Satir, a pioneering family therapist who applied systems thinking to the study of human relationships, originated the model. It provides a new understanding of how communications can go wrong and how to keep them right.
Understanding, though, is not enough. We must have access to what we know in the moment, when we’re deeply involved intellectually and emotionally. In those moments of intense involvement, we’re most likely to slip, and least likely to remember what we’ve learned. That’s why we use an interactive learning model in this Session. We emphasize communication under stress, where the most expensive failures occur. We’ll learn to appreciate that it’s far easier to avoid damage than to repair it once it’s done.
Of all the stressful situations we encounter at work, one of the most difficult is saying no to power. We’ll show how to apply the models and techniques we learn to that very tricky class of situations.
Mr. Brenner holds a master’s degree in electrical engineering from MIT. His current interests focus on improving personal and organizational effectiveness in abnormal situations, such as dramatic change, enterprise emergencies, and high-pressure projects. He has written a number of essays on these subjects, available at his Web site, http://www.ChacoCanyon.com/, and writes and edits a weekly email newsletter, Point Lookout.
F. Michael Dedolph, CSC
When the World Trade Center collapsed, the switching systems in the basement correctly diagnosed which lines were still working, and continued to connect calls using backup power for several days. One factor contributing to this remarkable product reliability was the AT&T/Bell Labs practice of early systems architecture reviews.
In this session, the speaker introduces the SARB review process practiced at Bell Labs.
SARB reviews provide an alternative to the SEI’s Architecture Tradeoff Analysis Method (ATAM). Compared to ATAM, SARB-style architecture reviews can be easily and flexibly tailored to the context, making the method suitable for many kinds of systems and problem domains.
From 1997 to 2004, F. Michael Dedolph was a systems architecture review leader at Bell Labs (Lucent), where he also managed and taught Lucent’s Systems Architecture Introduction class. The SARB review process was developed over time with extensive consulting support from Jerry Weinberg. In addition to leading architecture review teams, he facilitated numerous risk identification and project retrospective sessions.
Prior to 1997, Michael worked in the Risk and Process programs at the SEI. While at the SEI, he was the technical lead for the teams that developed the SCE and CBA-IPI appraisal methods, and was the team leader for several Risk Reviews.
He started his IT career by spending 10 years as an Air Force computer officer.
Alan Koch, ASK Process Inc.
Managers and testers alike often describe testing as being all about finding bugs. This is a natural conclusion one might draw after observing the testing process: look for bugs, report bugs, then ensure they get fixed. Pretty simple. But there is a nasty problem with this. As we all know, testing all the defects out of a product is impossible in most cases. And as any student of logic will affirm, even if we could find and dispatch every defect we encounter, it is logically impossible to prove that no more defects exist. (You cannot prove the absence of something, only its presence.)
So if the objective of testing is to remove all defects from the product, we are virtually guaranteed to fail to some extent; defects will persist. But testing has a higher objective. Yes, testing finds defects (and of course we fix the defects we know about), but finding those defects is a means to a different, though related, end. Every product carries the risk of failing in some non-trivial way that would adversely affect its users or the organization. That risk needs to be managed, and testing enables us to manage it.
Michael Mah, QSM Associates Inc.
Strategic software developments – and failures – happen every day; Agile methods offer a major shift. But are they working? Drawing from industry statistics, Michael answers vital questions about Agile’s effectiveness, which may be turning the “law of software physics” upside down. Until now, there have been predictable relationships among schedule, staffing, and quality; industry data indicates Agile may be changing all this. See productivity findings at five Agile companies, and the results for time-to-market, productivity, and quality. Learn the right practices for your environment, including characteristics of successful measurement. See how metrics reveal insights into Agile approaches that are becoming mainstream.
Copyright PNSQC 2022