Mar 11 2006

An Intellectual Pursuit Continued with Appliance-based Solutions

Category: Intellectual Pursuits | JoeGeeky @ 17:13

This is a continuation of a previous post and focuses on one of the six tenets identified there.  In this post I will focus on appliance-based solutions.

Another trend expected to impact the systems of the future is the appliance approach to systems engineering. Today, capabilities are integrated into bigger and bigger machines, each with all the elements needed to operate on its own, such as monitors, keyboards, etc. This approach has resulted in the development of applications that are dependent upon an integrated architectural model. The side effects of this approach have unwittingly led to a number of problems for many different types of users.

  • The creation of major points-of-failure within systems architectures
  • Herculean logistical efforts required to mobilize a capability
  • Overwhelming configuration and administration effort
  • Limited scalability within existing architectures
  • Tightly bound system-level dependencies
  • Limited ability to take only what is needed, and no economies of scale when establishing and later growing capabilities
  • Architectural limits for single-system processors and memory constrain the potential of information-processing efficiency

At some point the bigger-box approach begins to unravel, especially in mobile environments. In today’s littoral environment, users need the flexibility to mobilize a capability rapidly, with little logistical overhead, little support from expert users, and scale it as the situation allows or demands without carrying thousands of pounds of computing hardware.

Commercially, the industry has moved to appliance-based solutions and embraced secure-wireless communication models. This approach has proven to have a number of advantages over the current bigger-box approach:

  • Capabilities are more narrowly defined and are often separated to allow more modular system options. The resulting capabilities are more loosely coupled, providing for both stand-alone and connected operations. This helps with system scalability and reduces the impact of failed components or limited communication availability. It also reinforces the black-box engineering techniques required to realize a fully scalable mobile capability.
  • Maintenance and services are simplified given the nature of plug-and-use implementation models.
  • Appliances focus on more discrete elements of a capability, allowing developers to better define the experience for both trained and untrained operators. Consider the commercial TiVo digital recorder. This is a great example of an appliance that, despite all its advanced options, is easy to implement and can be mastered by an untrained operator in very little time. This same model could lead to the implementation of new acoustic and signal processing appliances, system/mission status and alerting systems, aircrew kiosk services, briefing and collaboration support, and more.
  • Appliances can be shaped to meet the needs of the operating environment, making them more mobile, rugged, compact, ergonomic, etc. Consider the emerging options for flexible touch screen-enabled liquid crystal displays and advances in embedded computing equipment. While not yet widely available, these technologies will be within reach of future systems. As an example, aircraft/vehicle maintenance crews could keep up-to-date on changing schedules in the field using secure-wireless links, have access to technical references, and be tied in to alert and notification systems signaling the arrival of parts or pending emergency landings. Aircrews could have ready access to mission briefing material, imagery, and more.

There are a number of encouraging trends within industry today that will further empower developers and engineers with the tools to rapidly host/re-host applications and capabilities across a scaled range of operating hardware, including traditional rich clients, enterprise servers, virtual environments, thin clients, tablets, embedded and PDA devices, and others. Major OS vendors are all working on solutions to further abstract hardware from software, increase binary compatibility across OS boundaries, and enhance runtime portability. When appliance models are coupled with human engineering practices, service-oriented and distributed processing architectures, natural interface devices, and presence and discovery technology, the door opens for a whole new generation of mission support solutions.


Mar 6 2006

An Intellectual Pursuit Continued with Human Factors Engineering

Category: Intellectual Pursuits | JoeGeeky @ 05:41

This is a continuation of a previous post and focuses on one of the six tenets identified there.  In this post I will focus on human factors engineering.

Second only to SOA, the most critical change in any future architecture is the application of Task-Centered Design (TCD), Human Systems Integration (HSI), and Human Factors Engineering (HFE). These are all disciplines that attempt to apply what is known about human capabilities and limitations to the design of products, processes, systems, and work environments. They can be applied to the design of any system having a human interface, including hardware and software. Their application to system design improves ease of use, system performance and reliability, and user satisfaction, while reducing operational errors, operator stress, training requirements, user fatigue, product liability, and more.

It is easy to see that systems are becoming substantially more complicated when compared to earlier versions, without providing any substantial additional capability. At the same time, end-user confidence, training, and the availability of corporate knowledge are on the decline. In part, this is a result of today's systems engineering approaches and the assumptions they make about user behavior in the field. One of the ways industry is attempting to address this paradigm is by measuring end-user experience economics.

When people make an investment, ideally there should be a return on that investment (ROI). This basic economic principle can and should be applied to the user experience. How? Once you understand a task and its anticipated user base, you can begin figuring out how to ensure that users achieve the most benefit by considering the frequency with which the task is performed. Generally speaking, the more frequently a user performs a task, the less time they'll spend relearning that task the next time they perform it, thereby increasing the ROI. Conversely, the less frequently a user performs a task, the more time they'll spend relearning that task the next time they perform it, detracting from the potential ROI.

Other factors may also affect the potential ROI, including task complexity and user base experience. As task complexity increases, users may need to relearn more with each subsequent task execution, potentially detracting further from ROI. On the other hand, as expertise of the user base increases, with regard to general windowing and specific task skills, users may need to relearn less with each subsequent task execution, thereby potentially increasing the ROI further. Considerations such as these have not received the attention they deserve, and given current trends within many communities, addressing these factors will be critical as systems and applications continue to evolve.
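The frequency, complexity, and expertise factors above can be sketched as a toy model. This is an illustrative assumption, not an established metric; the weights and formula are made up purely to show how the three factors pull the ROI in the directions the text describes.

```python
# Toy model of "user experience ROI". All weights and formulas here are
# illustrative assumptions, not an established HFE metric.

def relearning_cost(frequency_per_month: float,
                    complexity: float,
                    expertise: float) -> float:
    """Estimate the relative relearning cost of one task execution.

    frequency_per_month: how often the task is performed
    complexity: 1.0 (trivial) .. 10.0 (very complex)
    expertise: 0.0 (novice user base) .. 1.0 (expert user base)
    """
    # Frequent tasks are retained better; complexity raises the relearning
    # cost; user-base expertise lowers it.
    retention = min(1.0, frequency_per_month / 30.0)  # daily use ~ full retention
    return complexity * (1.0 - retention) * (1.0 - expertise)

def ux_roi(task_value: float, frequency_per_month: float,
           complexity: float, expertise: float) -> float:
    """Return value delivered per unit of relearning effort."""
    cost = relearning_cost(frequency_per_month, complexity, expertise)
    return task_value / (1.0 + cost)

# A frequent, simple task yields a higher ROI than a rare, complex one.
frequent = ux_roi(task_value=10.0, frequency_per_month=30, complexity=2, expertise=0.5)
rare = ux_roi(task_value=10.0, frequency_per_month=1, complexity=8, expertise=0.5)
assert frequent > rare
```

However crude, a model like this makes the trade-offs explicit: raising task frequency or user expertise increases the ROI, while raising complexity decreases it.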

Today’s application interfaces generally follow a deductive model, which is to say the user is often left to deduce what should be done next with little or no help. Assistance in these models is often relegated to an external location and follows a reference-book model. Future architectures will need to take more instructive, or inductive, approaches. This type of approach attempts to educate the user throughout the process, provides various options and recommendations, and delivers a visually and graphically rich experience. Inductive approaches also attempt to target different users in different ways, based on their perceived experience levels and interfacing modes.

Within Human System Engineering, there are generally four types of user experiences:

10-foot – These are circumstances where the user interfaces from a great distance such as 10 feet, hence the label ‘10-foot user experience’. In the last few years, these interfaces have become very popular in the systems community and are generally seen in appliances such as TiVo and Windows Media Center. The users are generally restricted by input and navigation devices (e.g. remote control), and more often than not, only want high-level functionality. This experience demands that the user require no training and find things easily with little or no input. Despite the distance, this approach is often used for kiosk applications. Within many communities this is ideal for personnel who need to access current data to support briefings, keep situational awareness, view imagery, and more. Air, vehicle, and maintenance crews would use such devices to monitor status, find parts, review schedules, etc. While not yet widely embraced in many communities, I believe this will be used much more widely in the future. This experience is commonly referred to as the lean-back experience.

2-foot – This is the most common experience today. In this circumstance, the user is within reaching distance of the computer (e.g. laptop or desktop); the user has access to or control over all the peripheral devices and is generally more highly trained. This experience is targeted for users that require a great deal more functionality and as a result generally require more training. This experience is commonly referred to as the lean-forward experience.

1-foot – This experience is generally targeted towards the mobile PDA or Tablet experience. In this environment, the devices are usually very small but are often highly capable. The users generally require functionality that demands training, although not as much as the 2-foot user, sacrificing some capability in exchange for mobile flexibility. Screen real-estate on these devices is generally limited, although this experience is preferred for some specialized mobile appliances. This experience is commonly referred to as the lean-forward experience.

0-foot – This experience represents worn computing devices such as heads-up displays and would not likely be practical in most systems. This experience is commonly referred to as the lean-forward experience.
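The four experience categories above lend themselves to a simple selection table an inductive application might consult at startup. This is a hypothetical sketch: the category names come from the post, but the profile fields, values, and distance thresholds are illustrative assumptions.

```python
# Hypothetical sketch: mapping the four user-experience categories to
# presentation profiles. Profile fields and thresholds are illustrative
# assumptions, not values from any real system.

from dataclasses import dataclass

@dataclass
class UiProfile:
    experience: str       # "10-foot", "2-foot", "1-foot", or "0-foot"
    font_scale: float     # larger text for more distant viewing
    input_mode: str       # expected primary input device
    feature_depth: str    # how much functionality is exposed

PROFILES = {
    "10-foot": UiProfile("10-foot", 3.0, "remote-control", "high-level"),
    "2-foot":  UiProfile("2-foot",  1.0, "keyboard-mouse", "full"),
    "1-foot":  UiProfile("1-foot",  1.2, "stylus-touch",   "reduced"),
    "0-foot":  UiProfile("0-foot",  0.8, "voice-gesture",  "minimal"),
}

def select_profile(viewing_distance_ft: float) -> UiProfile:
    """Pick the closest experience category for a given viewing distance."""
    if viewing_distance_ft >= 6:
        return PROFILES["10-foot"]
    if viewing_distance_ft >= 1.5:
        return PROFILES["2-foot"]
    if viewing_distance_ft >= 0.5:
        return PROFILES["1-foot"]
    return PROFILES["0-foot"]

# A living-room appliance gets big text and high-level features; a desktop
# user gets the full feature set.
assert select_profile(10).feature_depth == "high-level"
assert select_profile(2).feature_depth == "full"
```

The point of the table is that the same underlying capability can present itself differently per experience class, rather than forcing every user through the 2-foot interface.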

Addressing human factors is not limited to those elements. When human factors engineering is applied to minimize the time and effort required to perform preventive and unscheduled maintenance, it is referred to as designing for system maintainability. Hardware accessibility is optimized for the most frequent maintenance tasks, removable components are designed for human lifting, and field service manuals are designed for ease of use. Field observation techniques can also be developed to ascertain the level of effort required to maintain existing systems and to identify opportunities for system improvement.

Knowledge of human perceptual systems aids in designing or selecting display techniques and technologies in system interfaces. Human factors engineering applies what is known about human cognitive and motor output characteristics to the design and selection of required responses and control technologies used in human-machine systems. By preventing mismatches, this approach improves the communication between the human and the system.

Usability testing is a technique used to quantitatively evaluate a given prototype design. It can range in rigor from conducting interviews and focus groups to detailed simulations using representative human subjects and system-related tasks. The metrics recorded during testing can be used to evaluate performance or to make comparisons among several candidate designs.
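A quantitative comparison of candidate designs can be as simple as summarizing a couple of standard usability metrics per design. The sketch below is illustrative: the metrics (task completion time, error rate) are common choices, but the data is made up.

```python
# Illustrative sketch: comparing two candidate designs using simple
# usability-test metrics. The session data here is fabricated for
# illustration only.

from statistics import mean

def summarize(times_sec, errors):
    """Summarize one candidate design's test sessions.

    times_sec: task completion time per subject, in seconds
    errors: error count per subject for the same task
    """
    return {
        "mean_time_sec": mean(times_sec),
        "errors_per_session": sum(errors) / len(errors),
    }

design_a = summarize(times_sec=[42, 38, 51, 45], errors=[1, 0, 2, 1])
design_b = summarize(times_sec=[30, 33, 29, 35], errors=[0, 1, 0, 0])

# In this toy data set, design B completes the task faster with fewer errors.
assert design_b["mean_time_sec"] < design_a["mean_time_sec"]
assert design_b["errors_per_session"] < design_a["errors_per_session"]
```

In practice the interesting cases are the ones where the metrics disagree (one design is faster but more error-prone), which is where the rigor of the test protocol starts to matter.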

Estimates of the likelihood that a human error will occur in a human-machine system scenario are useful in both quantifying system reliability and in identifying better designs that reduce the potential for human errors.
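Folding human error probability estimates into a reliability figure can be done with a basic series model: the scenario succeeds only if every operator step is performed without error. The sketch below assumes independent steps and uses made-up probabilities for illustration.

```python
# Illustrative sketch: a series-reliability model over human error
# probabilities (HEPs). Assumes independent steps; all probabilities are
# fabricated for illustration.

def mission_success_probability(step_heps):
    """Probability that every operator step completes without error,
    given a per-step human error probability for each step."""
    p = 1.0
    for hep in step_heps:
        p *= (1.0 - hep)
    return p

# Three operator steps with estimated per-step error probabilities.
baseline = mission_success_probability([0.01, 0.05, 0.02])
# A redesigned interface that halves the riskiest step's HEP.
improved = mission_success_probability([0.01, 0.025, 0.02])
assert improved > baseline
```

This is exactly the kind of before/after comparison the paragraph describes: the model quantifies how much a design change to the riskiest step moves overall system reliability.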


Mar 2 2006

An Intellectual Pursuit

Category: Intellectual Pursuits | JoeGeeky @ 05:08

This one was a bit of a brain bender... In this project I didn't write any code at all. I was asked to try and define key developmental trends that might define the tenets for any number of new projects. In this great big world of ours there are a lot of opinions on this issue, so I read and I read, but in the end, I was left to come up with my own... Ohhh no! I had to think for myself and in this case I could not rely on the genius of Microsoft, MSDN, or any other resource I had come to rely on. Ok... Here it goes...

In order to understand long range technological impacts on the systems and products of the future, we need to take a look at key industry and government transformations and architecture trends that are forming today. In the last decade, the requirements for application and information interoperability, and more importantly, cross-domain information sharing, have widened significantly. Technologically speaking, these changes have influenced how industry and government stakeholders develop systems, applications, and information resources today. The following areas constitute some of the key transformational elements of modern solutions:

• Service-Oriented Architecture (SOA)
• Human Factors Engineering
• Distributed, Parallel and Adaptive Information Processing
• Presence and Discovery Enabled Solutions
• Appliance-based Solutions
• Localization, Globalization, and Internationalization

As part of this endeavor I also put together an HCI Concepts brief. At a high level, the goal was to expand on the Human Factors Engineering material, with specific emphasis on graphical interfaces. As with most complex issues, the most important thing is to (a) define a common vernacular, (b) define measurements for success, and (c) come up with a jingle... In this case "Sex Sells" and I don't mean porn.

Over the next few posts I will explore these ideas in more depth, so stay tuned...

HCI Considerations.pdf (2.77 mb)
