Mar 15 2006

An Intellectual Pursuit Continued with Localization and Globalization

Category: Intellectual Pursuits | JoeGeeky @ 17:34

As witnessed on almost every news channel today, the military’s mission objectives can shift radically from warfighting to civil support. This community has a lot of really great examples of practical localization and globalization opportunities. In the classic military environment, everyone speaks English, understands the common military phonetic alphabet, and has their clocks set to Greenwich Mean Time (i.e. Zulu) using military time. Twenty years ago those assumptions generally held up; today, none of them can be relied upon any longer.

Today, commanders in the field have to be able to adapt their available assets more quickly to peace-driven, civil support and security objectives. In these roles, field commanders often support non-military domestic and international civil authorities. Now, military units have to speak in local terms, to include time format, time zone, language, grammar, measurement conversion, currency, telephony requirements, and more. While this paradigm is not a new one, their architectures generally do not support these changes and present difficult problems for field users who have to toggle back and forth between the two.

Consider something as simple as a keyboard. In a multi-national environment, how would foreign nationals interact with our keyboards? Internationally, keyboard layouts vary widely. From a technological standpoint, advances in keyboards (e.g. LCD keyboards) will offer cost-effective solutions. While not currently available to the public, this type of technology offers a lot from a human factors standpoint, and specifically helps address the multi-cultural, regional, and language challenges faced in the field.

Issues such as these require consideration early in the development and engineering processes. We are not proposing that all applications need to support every language, locale, and/or culture right out of the box. However, the geopolitical and military support climates today, and for the foreseeable future, suggest that future capabilities need to be malleable enough that we can localize/globalize them as needed.

For developers and engineers, localizing a system is the process of extracting all the language-dependent content that is normally compiled into applications and putting it in a separate location that can later be modified by translators and system administrators. By itself, this type of change does not cost anything and is really a matter of discipline within the engineering process. However, if this type of issue is not identified early, the cost to go back and make these changes later can be quite high.
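As a minimal sketch of what this separation can look like in practice (assuming a Java shop; the bundle name and keys are purely illustrative), the application looks user-visible strings up by key while translators maintain per-locale properties files that never touch the compiled code:

```java
import java.util.Locale;
import java.util.ResourceBundle;

public class StatusReport {
    public static void main(String[] args) {
        // The locale could come from user or system settings; Messages.properties
        // holds the default strings and Messages_fr.properties (or _de, _ar, ...)
        // holds the translations, edited by a translator without recompiling.
        Locale locale = Locale.FRANCE;
        ResourceBundle messages = ResourceBundle.getBundle("Messages", locale);

        // All user-visible strings are looked up by key rather than hard-coded.
        System.out.println(messages.getString("report.title"));
        System.out.println(messages.getString("report.status.ready"));
    }
}
```

Adding a new language then becomes a matter of shipping another properties file rather than reopening the application itself.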

Tags:

Mar 13 2006

An Intellectual Pursuit Continued with Presence Awareness

Category: Intellectual Pursuits | JoeGeeky @ 06:57

This is a continuation of a previous post and focuses on one of six tenets identified in the previous post.  In this post I will focus on presence awareness.

If you use instant messaging (IM), you already know the difference between old-fashioned email and the power of instant messaging. It's not the speed, it's the knowledge that before you send a message, you have a better sense of whether or not the person on the other end is available to receive it. This immediate knowledge can make instant messages succeed where email fails. For example, if you want to consult someone in another facility, sending an e-mail doesn’t give you any assurance that he/she is available to read your message. Telephoning is intrusive and in some environments may not be available. Walking around to locate the person can also be impractical, and may distract you from other tasks. There are few other mediums like this, and the ability to determine presence will become a more powerful tool as these technologies develop.

Chances are this is already something you know very well, as IM is a prevalent technology in the commercial sector. The concept of presence within our day-to-day activities provides us with a wide range of new possibilities. Presence-enabled solutions tend to be more natural for end users, often requiring little or no interaction from the user to adopt and utilize. The use of such tools is often non-intrusive, provides users with immediate feedback, and can be contextually defined to further enhance the end user's experience. In development terms, these types of solutions turn the classic Use Case on its head. For example:

  • Today’s services often require the user to drive functionality
  • Environments embracing presence- and discovery-enabled solutions have the Use Cases serve the user without them having to request/start the action

These types of presence capabilities can be achieved using any number of identification solutions, ranging from system credentials and smart cards to biometric systems, wearable devices, Radio Frequency Identification (RFID) tags, and embedded signaling equipment, all of which can be used to calculate presence in one form or another. These types of options are becoming more and more prevalent and cost effective (a rough sketch of how such signals might be combined follows the examples below). For example:

Presence-enabled cameras put users online when they sit in front of the system. These types of devices also support face tracking and sound isolation, which further enhance the collaboration experience for users.

RFID systems allow mobile system and inventory managers to find that elusive part, component, or other buried resource that may be packed away. Consider a mobile system's inventory: with these devices properly tagged, load-out planning can be virtualized, capabilities and dependencies mapped, inventories streamlined, and more.

Today, manufacturers are working to embed high-fidelity microphone arrays and speaker systems to enable voice-enabled and voice-printing applications, which will further widen presence-enabled authentication, configuration, and more.
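To make the "calculate presence in one form or another" idea concrete, here is a minimal sketch; the signal sources, time windows, and state names are my own illustrative assumptions, not a recipe from any particular product:

```java
public class PresenceCalculator {

    public enum PresenceState { PRESENT, RECENTLY_SEEN, AWAY }

    private static final long PRESENT_WINDOW_MS = 2 * 60 * 1000;   // seen within 2 minutes
    private static final long RECENT_WINDOW_MS  = 15 * 60 * 1000;  // seen within 15 minutes

    // Hypothetical inputs: the last time each identification source saw the user
    // (badge/RFID read, camera face detection, keyboard or credential activity).
    // A value of 0 means that source has never seen the user.
    public PresenceState calculate(long lastBadgeReadMs,
                                   long lastCameraDetectionMs,
                                   long lastInputActivityMs) {
        long mostRecent = Math.max(lastBadgeReadMs,
                          Math.max(lastCameraDetectionMs, lastInputActivityMs));
        if (mostRecent == 0) {
            return PresenceState.AWAY;
        }
        long sinceSeen = System.currentTimeMillis() - mostRecent;
        if (sinceSeen < PRESENT_WINDOW_MS) {
            return PresenceState.PRESENT;
        }
        if (sinceSeen < RECENT_WINDOW_MS) {
            return PresenceState.RECENTLY_SEEN;
        }
        return PresenceState.AWAY;
    }
}
```

The point is less the thresholds themselves than the fact that several low-cost signals can be folded into a single presence state the rest of the system can react to.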

Presence-enabled solutions are not just hardware-based. Simple application enhancements using presence/discovery can empower the end user and provide a more pleasing user experience. Applications can automatically discover and consume services, pick up optional capability resources, participate in distributed processing pools, and more. However, even smaller presence-aware techniques can enhance the end user's experience.
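As a rough sketch of the discover-and-consume idea, an application might simply listen for service announcements on a well-known multicast group and use whatever it hears, with no action required from the user. The group address, port, and announcement format below are purely illustrative assumptions:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal discovery listener: services periodically announce themselves on a
// multicast group; applications register and consume whatever they discover.
public class ServiceDiscoveryListener {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.255.10.10"); // illustrative group
        try (MulticastSocket socket = new MulticastSocket(4545)) {  // illustrative port
            socket.joinGroup(group);
            byte[] buffer = new byte[512];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                String announcement = new String(packet.getData(), 0, packet.getLength());
                // e.g. "weather-service|http://host:8080/weather" -> register and use it
                System.out.println("Discovered service: " + announcement);
            }
        }
    }
}
```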

In the systems of the Future, presence- and discovery-enabled solutions will play a key role in empowering users with the information and context relevant to a user's defined duties, mission objectives, mission conditions, state of alert, environment, subscriptions, location, and more. Presence and discovery technologies will enable solution developers to give applications the ability to dynamically transform the user experience and the composition and usability of computing devices, applications, and data. New cost-effective presence and discovery technologies are beginning to emerge, and I expect this trend to continue in the out years.

Tags:

Mar 11 2006

An Intellectual Pursuit Continued with Appliance-based Solutions

Category: Intellectual Pursuits | JoeGeeky @ 17:13

This is a continuation of a previous post and focuses on one of six tenets identified in the previous post.  In this post I will focus on appliance-based solutions.

Another trend expected to impact the systems of the Future is the appliance approach to systems engineering. Today, capabilities are integrated into bigger and bigger machines, each with all the elements needed to operate on their own, such as monitors, keyboards, etc. This approach has resulted in the development of applications that are dependent upon an integrated architectural model. Side effects caused by this approach have unwittingly led to a number of problems for many different types of users.

  • The creation of major points-of-failure within systems architectures
  • Herculean logistical efforts required to mobilize a capability
  • Overwhelming configuration and administration effort
  • Limited scalability within existing architectures
  • Tightly bound system-level dependencies
  • Limited ability to take only what is needed. No economies of scale with respect to establishing and later growing capabilities
  • Architectural limits for single system processors and memory constrain the potential of information processing efficiency

At some point the bigger-box approach begins to unravel, especially in mobile environments. In today’s littoral environment, users need the flexibility to mobilize a capability rapidly, with little logistical overhead, little support from expert users, and scale it as the situation allows or demands without carrying thousands of pounds of computing hardware.

Commercially, the industry has moved to appliance-based solutions and embraced secure-wireless communication models. This approach has proven to have a number of advantages over the current bigger-box approach:

  • Capabilities are more narrowly defined and are often separated to allow more modular system options. The resulting capabilities are more loosely coupled, providing for both stand-alone and connected operations (a minimal sketch of this pattern appears after this list). This helps with system scalability and reduces the impact of failed components or lost communication availability. This reinforces the black-box engineering techniques required to realize a fully scalable mobile capability.
  • Maintenance and services are simplified given the nature of plug-and-use implementation models.
  • Appliances focus on more discrete elements of a capability, allowing developers to better define the experience for both trained and untrained operators. Consider the commercial TiVo digital recorder. This is a great example of an appliance that, despite all its advanced options, is easy to implement and can be mastered by an untrained operator in very little time. This same model could lead to the implementation of new acoustic and signal processing appliances, system/mission status and alerting systems, aircrew kiosk services, briefing and collaboration support, and more.
  • Appliances can be shaped to meet the needs of the operating environment, making them more mobile, rugged, compact, ergonomic, etc. Consider the emerging options for flexible touch-screen-enabled liquid crystal displays and advances in embedded computing equipment. While not yet widely available, these will be options for future systems. As an example, aircraft/vehicle maintenance crews could keep up to date on changing schedules in the field using secure wireless links, have access to technical references, and be tied in to alert and notification systems signaling the arrival of parts and pending emergency landings. Aircrews could have ready access to mission briefing material, imagery, and more.
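Here is the minimal sketch of the stand-alone/connected pattern mentioned in the list above. The StatusReporter and Transport names are hypothetical, and a real appliance would obviously need durable storage and retry policies; the point is only that the appliance keeps working when the link is down and catches up when it returns:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A loosely coupled appliance component: outbound reports are queued locally
// while disconnected and flushed once connectivity returns.
public class StatusReporter {

    private final Queue<String> pending = new ArrayDeque<>();
    private final Transport transport;

    public StatusReporter(Transport transport) {
        this.transport = transport;
    }

    public void report(String statusMessage) {
        pending.add(statusMessage);   // always accept work, connected or not
        flushIfPossible();
    }

    public void flushIfPossible() {
        while (transport.isConnected() && !pending.isEmpty()) {
            transport.send(pending.poll());   // forward queued reports once connected
        }
    }

    // Hypothetical transport abstraction; a real appliance might use a secure
    // wireless link, a serial bus, or a store-and-forward relay here.
    public interface Transport {
        boolean isConnected();
        void send(String message);
    }
}
```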

There are a number of encouraging trends within industry today that will further empower developers and engineers with the tools to rapidly host/re-host applications and capabilities across a scaled range of operating hardware to include traditional rich clients, enterprise servers, virtual environments, thin clients, tablets, embedded and PDA devices, and others. Major OS vendors are all working on solutions to further abstract hardware from software, increase binary compatibility across OS boundaries, and enhance runtime portability. When appliance models are coupled with human engineering practices, service-oriented and distributed processing architectures, natural interface devices, and presence and discovery technology it opens the door for a whole new generation of mission support solutions.

Tags:

Mar 7 2006

An Intellectual Pursuit Continued with Distributed Processing

Category: Intellectual Pursuits | JoeGeeky @ 17:51

This is a continuation of a previous post and focuses on one of six tenets identified in the previous post.  In this post I will focus on distributed processing.

In today’s environment, information cannot be processed fast enough. After a decade of throwing bigger equipment at processing challenges, the community has still not come close to meeting the sixteen-times-real-time performance requirements established by the operational community. To reach that threshold and keep up with the ever-increasing demands of complex information processing and analysis, a new approach is required. Distributed and parallel processing techniques offer a viable and scalable solution.

Over the last decade, organizations such as SETI (the Search for Extraterrestrial Intelligence) and RSA Data Security have successfully used distributed computing to analyze data. This is accomplished by harnessing the idle CPU cycles of volunteer or subscriber computers/computing appliances distributed across the Internet, which together can process in excess of 100 million instructions per second. In the last 20 years, parallel processing, concurrent processing, and optimized distributed computing have realized vast savings in computational time for a wide variety of applications. The problem in deploying parallel processing on the Web has been software limitations and the reliance on unique hardware required by prevailing parallel programming techniques. For example, operating systems such as Windows NT supported multiple processors, but desktop PC applications were slow to fully exploit the multi-threading capability they offered.

Today, hardware innovations have led to high-speed switched interconnections within a wide range of computing devices. These interconnections have made distributed-memory massively parallel processing (MPP) appear to programmers like shared-memory symmetric multiprocessing (SMP) machines. These hardware advances, in addition to advances in Rapid Application Development (RAD) environments, have made the realization of this capability much easier. Forward-looking approaches today seek to leverage the rapid advances in processor design, Internet connectivity, and the implementations of distributed computing embodied in platforms such as the .NET Framework and Java. In the systems of the Future, processing agents installed on available computing devices will allow data such as acoustic signals and electronic emissions to be processed more rapidly as available network assets increase. When combined with the presence and discovery approaches discussed elsewhere in this series, processing clients can automatically subscribe to process pools and provide more adaptive processing support.
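As a small sketch of the idea (using Java's standard java.util.concurrent pool; the "signal chunk" workload is a stand-in for real acoustic or signals processing, not any particular system), work is fanned out to however many processors are available and the partial results are gathered back:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSignalProcessor {

    public static void main(String[] args) throws Exception {
        double[][] chunks = loadChunks();                       // illustrative input data
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        List<Future<Double>> results = new ArrayList<>();
        for (double[] chunk : chunks) {
            results.add(pool.submit(new ChunkTask(chunk)));     // fan work out to the pool
        }
        double total = 0;
        for (Future<Double> result : results) {
            total += result.get();                              // gather partial results
        }
        pool.shutdown();
        System.out.println("Combined result: " + total);
    }

    // Each task processes one chunk independently; here it just sums signal energy.
    static class ChunkTask implements Callable<Double> {
        private final double[] samples;
        ChunkTask(double[] samples) { this.samples = samples; }
        public Double call() {
            double energy = 0;
            for (double s : samples) energy += s * s;
            return energy;
        }
    }

    private static double[][] loadChunks() {
        return new double[][] { {0.1, 0.2, 0.3}, {0.4, 0.5}, {0.6, 0.7, 0.8, 0.9} };
    }
}
```

The same fan-out/gather shape applies whether the pool is local cores or, with a distributed executor, processing agents discovered across the network.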

Tags:

Mar 6 2006

An Intellectual Pursuit Continued with Human Factors Engineering

Category: Intellectual Pursuits | JoeGeeky @ 05:41

This is a continuation of a previous post and focuses on one of six tenets identified in the previous post.  In this post I will focus on human factors engineering.

Second only to SOA, the most critical change in any future architecture is the application of Task-Centered Design (TCD), Human Systems Integration (HSI), and Human Factors Engineering (HFE). These are all disciplines that apply what is known about human capabilities and limitations to the design of products, processes, systems, and work environments. They can be applied to the design of any system having a human interface, including hardware and software. Their application to system design improves ease of use, system performance and reliability, and user satisfaction, while reducing operational errors, operator stress, training requirements, user fatigue, product liability, and more.

It is easy to see that systems are becoming vastly more complicated than earlier versions, often without providing any substantial additional capability. At the same time, end-user confidence, training, and availability of corporate knowledge are on the decline. In part, this is a result of today’s systems engineering approaches and the assumptions behind users' behaviors in the field. One of the ways industry is attempting to address this paradigm is by measuring end-user experience economics.

When people make an investment, ideally there should be a return on that investment (ROI). This basic economic principle can and should be applied to the user experience. How? Once you understand a task and its anticipated user base, you can begin figuring out how to ensure that users achieve the most benefit by considering the frequency with which it is performed. Generally speaking, the more frequently a user performs a task, the less time they'll spend relearning that task the next time they perform it, thereby increasing the ROI. Conversely, the less frequently a user performs a task, the more time they'll spend relearning that task the next time they perform it, detracting from the potential ROI.

Other factors may also affect the potential ROI, including task complexity and user base experience. As task complexity increases, users may need to relearn more with each subsequent task execution, potentially detracting further from ROI. On the other hand, as expertise of the user base increases, with regard to general windowing and specific task skills, users may need to relearn less with each subsequent task execution, thereby potentially increasing the ROI further. Considerations such as these have not received the attention they deserve, and given current trends within many communities, addressing these factors will be critical as systems and applications continue to evolve.
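One informal way to summarize the argument, using shorthand of my own rather than any established metric, is:

\[
\mathrm{ROI} \;\approx\; \frac{f \cdot B}{C_{\mathrm{learn}} + f \cdot C_{\mathrm{relearn}}(c,\, e)}
\]

where f is task frequency, B the benefit per execution, C_learn the up-front learning cost, and C_relearn(c, e) the per-execution relearning cost, which tends to grow with task complexity c and shrink with user-base experience e. The formula is only a sketch, but it makes the trade-offs above explicit: frequent, simple tasks performed by experienced users pay back quickly; rare, complex tasks performed by novices may never pay back at all.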

Today’s application interfaces generally follow a deductive model, which is to say the user is often left to deduce what should be done next with little or no help. Assistance in these models often lives in an external location and follows a reference-book model. Future architectures will need to take more instructive or inductive approaches. This type of approach attempts to educate the user throughout the process, provides various options and recommendations, and provides a visually and graphically rich experience. Inductive approaches also attempt to target different users in different ways, based on their perceived experience levels and interfacing modes. For example:

Within Human System Engineering, there are generally four types of user experiences:

10-foot – These are circumstances where the user interfaces from a greater distance, such as 10 feet, hence the label ‘10-foot user experience’. In the last few years, these interfaces have become very popular in the systems community and are generally seen in appliances such as TiVo and Windows Media Center. The users are generally restricted by input and navigation devices (e.g. a remote control), and more often than not only want high-level functionality. This experience demands that the user require no training and be able to find things easily with little or no input. Despite the distance implied, this approach is often used for kiosk applications. Within many communities this is ideal for personnel who need access to current data to support briefings, keep situational awareness, view imagery, and more. Air, vehicle, and maintenance crews would use such devices to monitor status, find parts, review schedules, etc. While not yet widely embraced in many communities, I believe this will be used much more widely in the future. This experience is commonly referred to as the lean-back experience.

2-foot – This is the most common experience today. In this circumstance, the user is within reaching distance of the computer (e.g. laptop or desktop); the user has access to or control over all the peripheral devices and is generally more highly trained. This experience is targeted for users that require a great deal more functionality and as a result generally require more training. This experience is commonly referred to as the lean-forward experience.

1-foot – This experience is generally targeted at the mobile PDA or Tablet experience. In this environment, the devices are usually very small but often highly capable. The users generally require functionality that requires training, although not as much as the 2-foot user, accepting some sacrifices in exchange for mobile flexibility. Screen real estate on these devices is generally limited, although this experience is preferred for some specialized mobile appliances. This experience is commonly referred to as the lean-forward experience.

0-foot – This experience represents worn computing devices such as Heads-up displays and would not likely be practical in most systems. This experience is commonly referred to as the lean-forward experience.

Addressing human factors is not limited to those elements. When human factors engineering is applied to minimize the time and effort required to perform preventive and unscheduled maintenance, it is referred to as designing for system maintainability. Hardware accessibility is optimized for the most frequent maintenance tasks, removable components are designed for human lifting, and field service manuals are designed for ease of use. Field observation techniques can also be developed to ascertain the level of effort required to maintain existing systems and to identify opportunities for system improvement.

Knowledge of human perceptual systems aids in designing or selecting display techniques or technologies in system interfaces. Human factors engineering applies what is known about human cognitive and motor output characteristics to the design and selection of required responses and control technologies used in human-machine systems. In preventing mismatches, this approach improves the communication between the human and the system.

Usability testing is a technique used to quantitatively evaluate a given prototype design. It can range in rigor from conducting interviews and focus groups to detailed simulations using representative human subjects and system-related tasks. The metrics recorded during testing can be used to evaluate performance or to make comparisons among several candidate designs.

Estimates of the likelihood that a human error will occur in a human-machine system scenario are useful in both quantifying system reliability and in identifying better designs that reduce the potential for human errors.

Tags: ,

Mar 4 2006

An Intellectual Pursuit Continued with SOA

Category: Intellectual Pursuits | JoeGeeky @ 12:20

This is a continuation of a previous post and focuses on one of six tenets identified in the previous post.  In this post I will focus on service orientation.

The single most important advancement that will, by itself, define the future of modern systems and applications is the adoption of net-centricity using service-oriented approaches. In the last five years, “Network-centricity” has led the way to Net-centricity. That is to say, prior to the last five years, the emphasis was on information sharing within a particular domain—whether via IP networks or otherwise. The key difference between the two is the service-oriented approach, which will most likely become the foundation for the bulk of future capabilities and information resources, ranging from simple weather information to raw sensor information from a wide range of industries. While products within today's environment have relied heavily on network-centricity for many years, the information they provide has not, for many reasons, been consumable by many third-party applications. Service-oriented approaches espouse more discrete interfaces, ranging from raw data access to interfaces optimized for specific application requirements. This approach focuses on information sharing across domains through open standards, not through defined relationships between systems or programs of record using a mix of standards and proprietary protocols. Modern initiatives are helping to flesh out the requirements, standards, specifications, and best practices for service-oriented environments.

A service-oriented architecture is composed of a number of different components (known as services) that can be consumed by any number of client products. Service-oriented architectures (SOA) define specific dependencies and produce artifacts that can be molded to meet wider service needs than seen in most applications today, whether they are available on the network as a service or otherwise. Within SOA there are provisions for policy controls, information contracts, and the more technical data definitions, state management, and more. Service orientation can be implemented with a wide range of technologies and formats as needed to meet performance, size, and/or openness requirements.
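A minimal sketch of what a service contract can look like in code follows; the WeatherService name, fields, and local implementation are illustrative assumptions (picking up the simple weather information example above), not a prescription for any particular system:

```java
// The contract (interface) is all a consumer depends on. The provider behind it
// can be hosted locally, on an enterprise server, or exposed over open protocols
// (e.g. SOAP/HTTP) without the consumer changing.
public interface WeatherService {
    WeatherReport currentConditions(String locationCode);
}

class WeatherReport {
    public final String locationCode;
    public final double temperatureCelsius;
    public final String summary;

    public WeatherReport(String locationCode, double temperatureCelsius, String summary) {
        this.locationCode = locationCode;
        this.temperatureCelsius = temperatureCelsius;
        this.summary = summary;
    }
}

// One possible provider; other implementations could wrap a sensor feed,
// a cached store for disconnected operation, or a remote web service proxy.
class LocalWeatherService implements WeatherService {
    public WeatherReport currentConditions(String locationCode) {
        return new WeatherReport(locationCode, 21.5, "Partly cloudy");
    }
}
```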


With this foundation established, the door opens to a wider range of information-leveraging and capability-composition options. Given this model, applications can become more malleable so they can change to meet the demands of the operating environment, establish new data partnerships and communication paths, and survive incremental service changes without risking the total capability set.

Within modern/future architectures, applications would be modeled to publish discrete data and applications (whether application or information services) and composed to create capabilities as needed.

Tags:

Mar 2 2006

An Intellectual Pursuit

Category: Intellectual Pursuits | JoeGeeky @ 05:08

This one was a bit of a brain bender... In this project I didn't write any code at all. I was asked to try and define key developmental trends that might define the tenets for any number of new projects. In this great big world of ours there are a lot of opinions on this issue, so I read and I read, but in the end, I was left to come up with my own... Ohhh no! I had to think for myself and in this case I could not rely on the genius of Microsoft, MSDN, or any other resource I had come to rely on. Ok... Here it goes...

In order to understand long-range technological impacts on the systems and products of the Future, we need to take a look at key industry and government transformations and architecture trends that are forming today. In the last decade, the requirements for application and information interoperability, and more importantly, cross-domain information sharing, have widened significantly. Technologically speaking, these changes have influenced a number of elements related to how industry and government stakeholders are developing systems, applications, and information resources today. The following areas constitute some of the key transformational elements of modern solutions:

• Service-orientation (SOA)
• Human Factors Engineering
• Distributed, Parallel and Adaptive Information Processing
• Presence and Discovery Enabled Solutions
• Appliance-based Solutions
• Localization, Globalization, and Internationalization

As part of this endeavor I also put together an HCI Concepts brief. At a high level, the goal was to expand on the Human Factors Engineering material, with specific emphasis on graphical interfaces. As with most complex issues, the most important thing is to (a) define a common vernacular, (b) define measurements for success,  and (c) come up with a jingle... In this case "Sex Sells" and I don't mean porn.

Over the next few posts I will explore these ideas in more depth, so stay tuned.

HCI Considerations.pdf (2.77 mb)

Tags: , , , , ,