Tuesday, March 22, 2011

The US Market for Self-paced eLearning Products and Services: 2010-2015 Forecast and Analysis






The US market for Self-paced eLearning products and services reached $18.2 billion in 2010. Demand is growing at a five-year compound annual growth rate (CAGR) of 5.9%, and revenues will reach $24.2 billion by 2015.

This report forecasts five-year online learning expenditures by eight buyer segments: consumer, corporate, federal government, state and local government, PreK-12 academic, higher education, non-profits and associations, and healthcare.

The five-year compound annual growth rate (CAGR) for Self-paced eLearning across all eight buyer segments is 5.9%, but growth is much higher in specific segments. For example, growth rates in the PreK-12, healthcare, and association segments are 16.8%, 16.3%, and 14.3%, respectively.

The rate of growth in the PreK-12 segment is due to the relentless migration to online content formats, and also due to the proliferation and success of for-profit online schools. Yet, buying behavior is erratic as schools struggle with budget cuts.

The rapid growth of virtual schools, the dramatic increase in online students, the recession, and state budget cuts are acting as iterative catalysts for Self-paced eLearning in the PreK-12 segment.

For example, budget cuts have prompted schools to reduce spending on summer school and classroom-based credit-recovery (making up for a failing grade) programs and increase spending on self-paced products and services. It is now more cost efficient to outsource credit-recovery programs to commercial online providers.

The primary catalyst driving the strong virtual school growth in the US is the economy. State-run virtual schools used to target courses that were not offered in local districts or not available to rural students. Now, as a way to cut costs, they are targeting core and supplemental curriculum as well.

The explosive growth of online enrollments in both academic segments in the US has created a boom market for Self-paced eLearning products in the PreK-12 and higher education segments.

The healthcare segment has been immune to the recession. Since the recession began, the healthcare segment has added over 866,000 jobs. According to a May 2010 report by the US Bureau of Labor Statistics (BLS), the healthcare segment has been adding an average of 19,700 jobs a month over the last two years.

Obviously, there is a strong demand for training and education in the healthcare segment. A major challenge for suppliers competing in the healthcare segment is identifying the buyers. This report describes the buying behavior in this complex segment.

Associations spend over $6.2 billion annually on educational events, and until recently, most of those events were in physical venues. Associations were once slow adopters of learning technology in general, but this is no longer true. This segment is moving fast to Self-paced eLearning. The current forecast has been revised significantly upward from previous forecasts.

In the past three years, across the entire market, the demand for Self-paced eLearning has slowed. This general slowing is due to three market factors:

  • Commoditization of platforms and tools
  • Pricing pressures in the corporate segment caused by the slow economic recovery
  • The growing tendency for buyers to purchase other types of learning technology products

Commoditization (for any product) occurs when demand is very high and competing products lack significant differentiation in the perception of customers. Customers shop for price. Learning platforms and authoring tools are now highly commoditized, particularly in the corporate segment.

Although the overall corporate growth rates are flat, demand is still quite strong, and the revenues are very high. The corporate market was an early adopter and companies continue to purchase Self-paced eLearning products. The corporate segment still represents the best revenue opportunities for suppliers.



There is now clear evidence that other learning product types such as Mobile Learning and Social Learning are cannibalizing Self-paced eLearning revenues. This is particularly prevalent in the consumer and higher education segments.

In the consumer segment, the growth rate for Self-paced eLearning content is now flat-to-negative at -1.9%, yet the consumer growth rate for Mobile Learning content is a healthy 18.3%. Social-based language learning sites are now very popular in the consumer segment as well.

The "online population" in the higher education segment is growing at a rapid rate. Lecture Capture Systems are now in high demand in the higher education segment and are dampening the growth of Self-paced eLearning.

This is called "product substitution" in a market and can be a significant threat to suppliers. Recommendations on how to deflect this threat are included in the final section of this report.


Wednesday, December 31, 2008

Road to the Oscars

Road to the Oscars

Vanitha Rangaraju-Ramanan talks about her long journey from Trichy to Hollywood to win the coveted award.



Every March, Indians are in ecstasy if an Indian movie is just nominated for the Oscars in the Best Foreign Language Film category. So far, the Oscar has eluded the Indian film industry, with the exception of Satyajit Ray, who received the coveted statuette for lifetime achievement. However, very few would know that Vanitha Rangaraju-Ramanan — an Indian woman from Trichy, Tamil Nadu — has actually won an Oscar.

Rangaraju-Ramanan won it for her technical work in the animation movie Shrek in 2002. It may become a double Oscar if the sequel, Shrek 3, wins in 2008.
Excerpts from an interview...


Tell us about yourself and your family.
VR: Trichy is my hometown. I was born and brought up there. I did my B.Arch. at the Regional Engineering College, Trichy. I worked in Bangalore where I met Ramanan, whom I married.

I live in California with my husband and seven-month-old daughter, Ananya. My folks come from a large family, with my dad having six brothers and three sisters. It's always great fun with the whole gang getting together during summer vacations. I've one sister, who lives in India.


How did you become interested in animation?
I was working in Bangalore when I saw a TV interview immediately after Toy Story (1995) — the first full-length 3D Computer Graphics (CG) feature film — had been released. It was fascinating and they talked about how people from different fields contributed to the movie's creation. I've always loved animation, and that interview got me seriously thinking about entering the field. Therefore, I left India to do my master's degree, majoring in computation and simulation at the University of Texas, Austin.
How did you get into Pacific Data Images (PDI)?
I got an internship during the last semester (autumn 1998) at Industrial Light and Magic, the leading visual effects studio in California. That was a big break. After I completed my internship, I immediately got a job as Lighting Technical Director at PDI (now PDI/DreamWorks) to work on Shrek. It was April 1999 and production on Shrek had just started.
What exactly does a lighting director do in the movie? How is it different from, say, a graphics director?
In addition to digitally lighting the film, the Lighting Department is responsible for bringing the many different components of a shot together — complex geometry, motion of the characters, textures, effects such as fire and dust, and the matte paintings. Technical directors are people who help make this happen, with both their artistic and technical abilities.
What was your reaction when you heard that Shrek had won the Oscar for technical work?
Shrek actually won the Oscar as the Best Animated Feature Film. The technical achievement, which translated into making Shrek a visual success, also helped tremendously in getting the award. To answer the question, it felt great... almost unreal. I'm so happy to have been a part of this great team. It is so hard to believe.
I still remember watching the Oscars in India, wondering how it would be to touch the statuette. And I actually got to hold it when our producer Aron Warner returned with it to PDI. It was a wonderful feeling.


Tell us how it felt working on 'Shrek' 3? How was it different from 'Shrek'? Was it easier or tougher?
It has been six years since the first Shrek movie was released, and many hardware and software advances have been made in the field of computer graphics.

As an industry leader in animated films, PDI always strives to keep ahead of the rest. For Shrek 3, we pushed the visual complexity of the film even further than the previous two instalments, including realistic hair, clothing, crowds, lighting, etc.

Do you plan to get into other areas of animation?
I've been involved in different capacities in each of the Shrek movies. I worked in the capacity of a Lighting Technical Director in Shrek, a Lighting Lead in Shrek 2, and now for Shrek 3 I am the Crowd Lead. So yes, I do look for new challenges in each.

Please tell us about your future projects.
I'm currently working on Madagascar: The Crate Escape.
What would you say to young hopefuls, especially school students, who want to get into animation?
The field of feature animation is extremely competitive. Not only do you need the right qualifications, you also need the right attitude, and most of all you need commitment to pursue a dream. Animation is where people from many different fields work together, bringing different talents to the table. Therefore, whether you are an engineer, photographer, painter, programmer or architect, everyone has the ability to make a huge contribution to the project.

So learn the things you are learning well. Nowadays, many schools offer courses specifically designed towards computer animation. There are many degree programmes as well. A good combination of strong foundation skills with relevant education guarantees a great start in the field of computer animation. Of course, nothing is more valuable than having some experience working in a CG company, so it would be good for students to look for internship opportunities there.

Do you think there will soon come a day in movies when actors will be completely replaced by graphic characters that look like real people?
No. It is a totally different medium and each has its own way of telling a story. They aren't mutually exclusive, so you will continue to have both live action features with real actors and animated movies with CG characters. Human actors will always be around.
In your opinion, how long do you think it will take the animation industry in India to make a mark on the global scene? Who do you see in the future as the other major players apart from the US?
Well, we are already seeing some studios doing a lot of animation work in India, and they are doing a rather good job as well. What you need is also a sufficient talent pool to support the industry, which is still in the nascent stages. France and many other European countries have always had a big focus on animation, as have Japan and Korea.


Courtesy: Gulf News Report

Wednesday, August 06, 2008

eLearning 2.0


eLearning 2.0 - Karrer - ASTD OC 2007

From: akarrer, 10 months ago





Presentation on eLearning 2.0.


SlideShare Link

Wednesday, February 27, 2008

Mobile Learning versus E-Learning - Is There a Difference?

As the potential for technology to enhance learning grows, we often see the phrase mobile learning bandied about. Clearly, the term appears vague as the concept emerges, yet it does call to mind a simple question: How does mobile learning differ from online or distance learning options?
In order to understand the term let’s review the concept as it exists by using the definition supplied from a rather technical article, “Defining, Discussing, and Evaluating Mobile Learning” on the website irrodl.org.
Strictly from a technological standpoint, the term is used for learning that can be delivered and supported entirely by mobile technology. Therefore, among the most common options that could be used for mobile learning are PDAs, smartphones and, of course, a wireless laptop.
But that begs the question: how does mobile learning differ from other forms of education? Is it really different from e-Learning?
To discuss those options we can immediately begin with the intent of the user. With e-learning, there is a specific intent to learn something - in fact the selection of e-learning is generally based on a desire to acquire a specific set of knowledge or skills. For e-learning we generally add some phrases like tethered (connected to something) as well as learning that is offered in a formal and structured manner.
For mobile learning, the first major difference is that it is un-tethered. It also is defined by learning that is more informal and opportunistic. We can run with that thought and add descriptors like private, situational, and unstructured.
With such thoughts one can clearly see an enormous distinction between e-learning and mobile learning. Most importantly, mobile learning has the potential for even greater impacts than e-learning.
For example, one major change in the idea of learning is that teachers used to deliver some material, or knowledge, with the idea that the student learns the concept “just-in-case.” In fact, most of education is traditionally offered in such a format.
The latest in technology means that a brand new focus is possible, that learning can be delivered “just-in-time.” With the concept of student ownership critical to learning processes, we can see that the latter option should be far superior when working with a classroom full of students. Because not only can the learning be provided “just-in-time,” it can be provided “just-enough” or even “just-for me.”
As a former teacher, I can quickly discern one critical question emerging from the mobile learning format. In such a situation, how does a teacher ensure that the learner retains the knowledge just utilized?
For most educators, a failure to provide an answer to that question will deter them from ever utilizing the mobile learning format.

Friday, January 04, 2008

Usability and Interface Design in eLearning


Blanket definition of usability and interface design
Usability and interface design, as blanket terms, need to be defined before exploring their meanings in the eLearning context. Usability is a term used to denote the ease with which people can employ a particular tool in order to achieve a particular goal. The extent to which an object is easily usable by the users determines the usability of that object. If a user can easily speak to somebody or send and receive text messages on a cell phone, it can be termed as fairly usable. The core is, as long as a user can employ a tool to perform its defined task without being stuck in learning how to use the object, it is usable. There are varying degrees of usability for different tools.

Any object becomes usable because of its interface. An interface can be defined as the medium through which a user interacts with or uses an object to perform particular tasks. Thus, the user interface of a cell phone comprises the keys and the display screen. The design of this interface is what determines its usability. If the interface design is complex and does not convey the exact function of each of its elements, it can scare off the user even before he actually starts using the object. Hence, it is important for designers to keep the interface design succinct in order to increase the usability of the object.

How do usability and interface design affect eLearning
eLearning is short for electronic learning, or learning through the combination of various electronic media like the computer, the web and other technologies. Obviously, it requires an interface between the learner and the technology to facilitate learning. The usability of any eLearning initiative can be determined by the ease with which learners can learn their chosen subject without being lost in the rigmarole of learning how to use the technology. And this is determined by the interface design of the eLearning process.

A badly designed user interface adversely affects the usability of an eLearning program. This drives away learners from taking up or completing the course, even before they can decide whether the course suits their needs. Little wonder then, that many students prefer the more conventional classroom based approach to learning. The usability and interface design of an eLearning course, therefore, can make or mar its success.



The neglect of usability and interface design in eLearning
While much research has gone into instructional design for eLearning, the interaction between the user and the medium of delivery has not been given as much thought. Being the most important aspect in the success of an eLearning initiative has not stopped it from being the most neglected one. This, among other snags, is responsible to a large extent for the high dropout rates of eLearning programs. An interface that leaves much to be desired makes students weary of the whole process of learning and kills their motivation. Thus, for an eLearning initiative to be successful, its usability and a well designed interface matter the most.

Key to a better interface design in eLearning
After recognizing the importance of a usable interface design in eLearning initiatives, the next step is to focus on building a better user interface. The basic user interface encompasses the following elements:

Orientation
This tells learners what part of the course they are accessing, and where they stand within the course. Exact orientation gives the learner an overview of how much has been learnt and how much still remains. This keeps their motivation in the program intact, and may even stimulate it.

Navigation
This comprises the keys and links that allow the learner to access relevant information with ease. It forms an important part of the user interface design because it determines the dexterity with which a user can traverse the sea of information contained in the course material.

Metaphors
Creating metaphors consists of deciding the premise of the entire program. The theme can be chosen depending upon the type of course it supports. For example, an educational program may use different backgrounds, menu element names, and colors than a program aimed at, say, conducting a seminar or training users on a particular appliance. A consistent feel throughout the course will help kindle the curiosity of learners.

Usability testing
Testing the designed user interface on the end users makes it more user-friendly. Any suggestions by learners can be incorporated, and areas that learners find difficult to use can be corrected. No interface design should be finalized without testing it on the target audience to gauge its actual usability. Since all templates of your eLearning course depend upon the user interface design, the best time to finalize the design is after it has been usability tested.


Wrap up
A poorly designed user interface and the low usability of an eLearning initiative cannot be solely blamed for its failure. Having said this, it remains equally true that low usability and a defectively designed user interface together account for a high number of learners dropping out of courses, since the interface is the starting point of the learner's interaction with the courseware. A good user interface design will improve the usability of a program and, in turn, encourage and motivate learners to stick with the program until its completion.

Friday, August 24, 2007

Standards in e-Learning: Why do we need them?


Standards are an integral part of everyday life. We take things for granted like stop lights, clocks (analog and digital), electrical plugs that fit into sockets (within a country or continent), light bulbs, railroad tracks, the Internet, etc. According to the International Organization for Standardization (ISO), standards are “documented agreements containing technical specifications or other precise criteria to be used consistently as rules, guidelines, or definitions of characteristics, to ensure that materials, products, processes and services are fit for their purpose.” There are two types of standards: de jure standards and de facto standards. De jure standards are those ratified by recognized international standards bodies such as the ISO and IEEE. De facto standards are those used by the vast majority of the market, but which aren’t necessarily open or based on any de jure standards. For example, Adobe Photoshop is considered the de facto standard for image editing, because it was chosen by the people who use it.


Simply put, de jure and de facto are used instead of "in principle" and "in practice," respectively. Typically, the process of standardization starts with a problem, followed by efforts to resolve that problem, then passes through a number of stages resulting in specifications. Ultimately, these specifications lead to the publishing of an accredited standard. A specification can be defined as a documented description. Some “specs” become a standard, which means they have received the stamp of accreditation from an authorizing body like the IEEE or ISO.


SCORM, which stands for Sharable Content Object Reference Model, is one such standard for e-Learning. SCORM came out of Advanced Distributed Learning (ADL), an initiative formed to develop and implement learning technologies across the US Department of Defense. It is a collection of standards and specifications adapted from multiple sources to provide a comprehensive suite of e-learning capabilities that enable interoperability, accessibility and reusability of Web-based learning content. ADL’s vision is to “provide access to the highest quality education, training and performance aiding, tailored to individual needs, delivered cost effectively, anytime and anywhere.”


Some of the issues we had before SCORM in e-Learning were:
· We couldn’t move a course from one Learning Management System to another.
· We couldn’t reuse content pieces across different Learning Management Systems.
· We couldn’t sequence reusable content for branching, remediation and other tailored learning strategies.
· We couldn’t create searchable learning object libraries or media repositories across different LMS environments.


SCORM addresses these issues and fulfils ADL’s vision by fostering creation of reusable learning content as “instructional objects” within a common technical framework. It describes that technical framework by providing a harmonized set of guidelines, specifications and standards and uses web as its primary medium of instruction. SCORM is built on the proven work of prominent organizations. It provides a reference model to accelerate standards development and is the first step to the path to defining a true learning architecture.
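To make the interoperability described above concrete, here is a minimal sketch of the discovery step every SCORM 2004 SCO performs at launch: locating the LMS-provided runtime object named API_1484_11 (the name and the eight call signatures are defined by the SCORM 2004 Run-Time Environment specification; the limit of 7 parent hops follows ADL's widely used sample code). The ApiHost type is a structural stand-in for the browser Window so the sketch is self-contained; in a real SCO you would pass `window`.

```typescript
// The eight calls every SCORM 2004 runtime API object must expose.
interface Scorm2004API {
  Initialize(param: ""): "true" | "false";
  Terminate(param: ""): "true" | "false";
  GetValue(element: string): string;
  SetValue(element: string, value: string): "true" | "false";
  Commit(param: ""): "true" | "false";
  GetLastError(): string;
  GetErrorString(code: string): string;
  GetDiagnostic(code: string): string;
}

// Structural stand-in for the browser Window, so the sketch runs anywhere.
interface ApiHost {
  API_1484_11?: Scorm2004API;
  parent?: ApiHost | null;
  opener?: ApiHost | null;
}

// Walk up the frame hierarchy (and fall back to the opener window)
// looking for the runtime API object published by the LMS.
function findAPI(win: ApiHost): Scorm2004API | null {
  let attempts = 0;
  let w = win;
  while (!w.API_1484_11 && w.parent && w.parent !== w && attempts < 7) {
    attempts++;
    w = w.parent;
  }
  if (w.API_1484_11) return w.API_1484_11;
  if (win.opener) return findAPI(win.opener); // SCO launched in a popup
  return null;
}
```

Because any conformant LMS must publish the API under this one well-known name, the same content package can run unmodified on any conformant system.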



To decide whether you need a standard such as SCORM, consider the following questions:

· Do you need to control learner access to courseware, track learner progress, or monitor the effectiveness of your e-learning content?
· Do you want to be able to control the learner’s path through the content in some way?
· Do you plan to develop content in house and also purchase content from one or more third-party content vendors?
· Do you plan to use the content for multiple new audiences in the future?
· Do you plan to reuse parts of the content in future courses?
· Are you planning to redistribute or sell the content to another organization?



If your answer to one or more of the above questions is “yes,” then you need standards such as SCORM.


We will know we are successful when these standards become transparent, when the “e” is no longer needed in e-Learning and e-government, and when the sharing and reusing of content become commonplace.


Wednesday, March 14, 2007

General architecture for a SCORM 2004 LMS implementation, Part 1

Ostyn Consulting Resources
General architecture for a SCORM 2004 LMS implementation, Part 1
Claude Ostyn
Version 1.0.2. Copyright © 2005, 2006 Claude Ostyn. This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License.
These are working notes for part 1 of a multipart document. The other parts are not publicly available yet. The plan for the other parts includes topics such as user interface considerations, client/server implementation considerations, use of XML, and auditing.

Part 1: Functional overview

Functional components and services
There are many ways to implement a SCORM 2004 conformant LMS. However, a fully functional LMS includes a number of features that are beyond the scope of SCORM. A useful way to manage this complexity is to look at a practical LMS implementation as consisting of several main functional components. The components are not listed in any particular order:
Content repository
Delivery management system
SCORM 2004 conformant delivery system (a.k.a. Runtime Environment)
Historical tracking information repository
Learner administration
Objective tracking (a.k.a. Lightweight Competency Management)
General administration (including reporting and notifications)
User portal (what a user sees when not actually experiencing a SCORM package)
The functional components may be implemented as cooperating services, or in a monolithic implementation. A services-based implementation makes it possible to update the components independently. It also facilitates the creation of integration interfaces, e.g. to integrate the learner administration component with an HR system.
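The decoupling a services-based implementation buys can be sketched in a few lines: each functional component hides behind a small interface, so one implementation (here, an in-memory one) can later be swapped for another (e.g. an HR-system-backed one) without touching its consumers. All names below are illustrative inventions, not part of SCORM or of the document's design.

```typescript
// Hypothetical service boundary for the learner administration component.
interface LearnerAdministration {
  lookupLearner(id: string): { id: string; name: string } | null;
}

// A trivial in-memory implementation of that boundary...
class InMemoryLearnerAdmin implements LearnerAdministration {
  private learners = new Map<string, string>();

  addLearner(id: string, name: string): void {
    this.learners.set(id, name);
  }

  lookupLearner(id: string): { id: string; name: string } | null {
    const name = this.learners.get(id);
    return name === undefined ? null : { id, name };
  }
}

// ...which could be replaced by an HR-system-backed implementation
// without changing any component that depends only on the interface.
```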

Other services and enterprise integration
Obviously, learning and training are not limited to SCORM content. A practical LMS will often need to include services such as:
Scheduling of offline learning activities, such as classroom instruction
Scheduling of facilities and resources such as physical or virtual classrooms and equipment or software tools
Gradebooks or similar means of tracking learner information in instructor-led learning contexts
Online virtual classrooms
Various synchronous or asynchronous collaborative learning tools, such as chat rooms, shared repositories for work products and resources used in group learning, wikis, help desks for expert advice on specific topics, etc.
Performance support, including reference libraries and help desks
Digital reference libraries for research or exploration, not necessarily tied to specific performance goals or targets
Collaborative learning facilities, such as multiuser simulation or game environments.
Personal portfolios
Also, increasingly, enterprises are looking at learning management as only part of a larger enterprise context in which human performance is managed to align with business goals and priorities. So, the LMS may be integrated with a competency management system that may include competency models, assessment processes and workflows, individual and group competency records, and various management tools to create, manage and mine competency information and facilitate alignment with business drivers.
Educational institutions have their own requirements. Some functional features that mesh with academic traditions, priorities and methods can be significantly different from those in a typical enterprise.
This document describes only the basic LMS functionality that is assumed to surround the management, delivery and tracking of SCORM based content. The other services and integration requirements are best described in separate documents with a broader scope.

Content repository
This is where SCORM packages are imported and stored for delivery. The basic repository features include:
Ingest process: Import, unpack and validate a SCORM 2004 package and store in a virtual or physical directory structure.
Maintain a catalog of imported packages. The catalog allows listing and searching. The catalog may use a subset of LOM metadata.
Extract metadata upon import and add to repository catalog, allowing minimal review/editing of metadata and completion of missing metadata.
Make a package available to the delivery management system (a design decision determines whether this is in native form or in a form pre-processed for ease of delivery)
Maintain access permissions and provide access only as authorized. This supports workflow, in which a package that is being imported is not available until the import is completed and the catalog entry is validated by an administrator. It may also support segmentation of the repository (e.g. different users have access to different types of content, or a content area may be restricted to users with certain privileges)

Support for remote content
SCORM content packages delivered by the LMS may reference learning objects that reside on remote servers, or the LMS might launch content packages that reside on remote servers. Because of the security constraints imposed by Web browsers to prevent cross-site scripting exploits, such remote content must appear to be coming from the same server as the LMS. This typically requires some infrastructure-level implementation to allow the LMS server and the remote content server to appear to be the same host.
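One common way to meet this same-origin constraint is a reverse proxy on the LMS host: the browser requests a local path, and the LMS forwards it to the remote content server. The sketch below shows only the URL-mapping step of such a proxy; the prefix and host names are hypothetical examples, and the actual request forwarding would be handled by web-server or proxy infrastructure, not by this snippet.

```typescript
// Hypothetical local mount point and remote content server (examples only).
const REMOTE_PREFIX = "/remote-content/";
const REMOTE_HOST = "https://content.example.com";

// Map a browser-visible LMS path to the remote URL it should be fetched
// from. Returns null for paths that are not proxied remote content.
function resolveRemoteUrl(localPath: string): string | null {
  if (!localPath.startsWith(REMOTE_PREFIX)) return null;
  return REMOTE_HOST + "/" + localPath.slice(REMOTE_PREFIX.length);
}
```

Because every asset then appears to come from the LMS origin, the browser lets the SCO's frames script against the LMS-provided API object without cross-site restrictions.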

Delivery System (a.k.a. Runtime Environment)
Figure 1: Schematic view of a SCORM runtime environment
This system provides the learner experience for a SCORM 2004 package. It manages the sequencing of the package. When a learner makes an attempt on the package, this system manages the attempt. It also collects and maintains tracking data until the attempt is completed. The Delivery System is split into a server-side component and a client-side component.

Server side component
Delivers a single attempt on an activity tree on behalf of the Delivery Management System. The attempt may be suspended and resumed in a later learner session.
Instantiates the client side component (frameset and the original content of the frameset) for delivery in the client side browser
Receives and responds to communications from the client side component.
Maintains state for the activity tree and sequencing state.
Maintains state for each activity, including communication data model data that persist between sessions
Requests persistent storage of suspend data as needed from the Delivery Management System.
Offers global objective status data to the Objective Tracking system, and gets global objective status data from the Objective Tracking system.
Offers historical tracking data to the Delivery Management System on an ongoing basis, or at least before discarding the data at the end of an attempt on a SCORM package activity tree or sub-activity.

Client side component
Typically consists of a persistent frameset that displays the runtime environment user interface components as well as the sequenced SCOs or Assets in a "stage" frame.
Includes the ability to send and receive data from the server side component without affecting the frameset itself or the stage frame. Some of the data may be fairly large (communication data model instances and/or runtime environment user interface data, such as the user's view of the activity tree, updated according to sequencing rules)
Manages the communication session for each SCO that is being launched.
Implements an API object that responds synchronously to the API calls from the SCO.
Sends data reliably to the server-side component in response to API "Commit" calls that may come in rapid succession or be widely spaced.
Implements minimal user interface components as listed in SCORM 2004 S&N book.
Implements a way for the user to inspect and navigate through a "tree of activities" that is updated dynamically to reflect the current state of allowed or visible activities, and which allows the user to choose activities randomly when allowed by the sequencing rules.
Manages display of interstitial state content "between SCOs" as necessary; for example, prompts the user for choices when the sequencer encounters a choice situation with no default flow control mode to guide the choice toward a specific activity.
Attempts to manage gross user errors, such as closing the browser before server side data has been committed.
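The API object mentioned above can be sketched as a minimal adapter: it caches data model values locally, answers the SCO's synchronous calls immediately, and flushes changed elements to the server on "Commit". This is a sketch under assumptions: `sendToServer` is a placeholder for the real transport, and error handling is reduced to a single last-error code.

```javascript
// Minimal sketch of a client-side SCORM 2004 API adapter. The SCO
// locates this object (as window.API_1484_11) and calls it
// synchronously; Commit flushes buffered changes to the server side
// component. sendToServer is a hypothetical transport function.
function createApiAdapter(sendToServer) {
  let state = {};      // local cache of communication data model values
  let dirty = {};      // elements changed since the last Commit
  let lastError = "0";
  return {
    Initialize: function () { lastError = "0"; return "true"; },
    Terminate: function (param) {
      this.Commit(param); // an implicit commit on termination
      return "true";
    },
    GetValue: function (element) {
      if (element in state) { lastError = "0"; return state[element]; }
      lastError = "401"; // undefined data model element (sketch only)
      return "";
    },
    SetValue: function (element, value) {
      state[element] = String(value);
      dirty[element] = String(value);
      lastError = "0";
      return "true";
    },
    Commit: function () {
      if (Object.keys(dirty).length > 0) {
        sendToServer(dirty); // must be reliable even under rapid calls
        dirty = {};
      }
      return "true";
    },
    GetLastError: function () { return lastError; },
    GetErrorString: function () { return ""; },
    GetDiagnostic: function () { return ""; }
  };
}
```

A real adapter must also queue or batch commits so that rapid successive "Commit" calls do not lose data, per the reliability requirement above.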

Delivery management system
The delivery management system keeps track of what is being delivered and for whom, and manages the persistent state of SCORM data between user sessions and user attempts.
Provides minimal management of user "attempt registration" status, e.g. maintains relevant state info as long as a user is "registered" for an attempt on a SCORM package.
Prevents multiple concurrent registrations by the same user for the same package.
Negotiates with repository for access to the content package to be delivered by the Delivery System.
Instantiates the Delivery System when the user is ready to experience a SCORM package.
Prevents multiple concurrent instances of the Delivery System for the same user.
Maintains state data on current attempt, including maintenance of the temporary persistent storage for the complete suspended state of an activity tree and its sub-activities.
Offers historical tracking data to the Historical Tracking Information Repository on an ongoing basis, or at least before discarding the data at the end of an attempt on a SCORM package activity tree or sub-activity.
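The "one registration per user per package" rule above can be sketched with a simple in-memory registry keyed on the user and package pair; a second register call for the same pair fails. The class and record shape are illustrative assumptions.

```javascript
// Hypothetical sketch of attempt registration management: prevents
// multiple concurrent registrations by the same user for the same
// package, as required above. In-memory only, for illustration.
class DeliveryManagementRegistry {
  constructor() { this.active = new Map(); }
  key(userId, packageId) { return userId + "|" + packageId; }
  register(userId, packageId) {
    const k = this.key(userId, packageId);
    if (this.active.has(k)) return null; // already registered: refuse
    const registration = { userId, packageId, attempt: 1, suspendData: null };
    this.active.set(k, registration);
    return registration;
  }
  unregister(userId, packageId) {
    this.active.delete(this.key(userId, packageId));
  }
}
```

A similar guard, keyed on the user alone, could enforce the rule against multiple concurrent instances of the Delivery System for the same user.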

Historical tracking information repository
The historical tracking information repository maintains historical records of past attempts to use SCORM 2004 packages. It may also maintain current records that are subject to update until "finalized". Note that the SCORM does not define any requirements for the management of historical records, except for the status of certain objectives declared in content packages (see below).
Provides structured access to viewing or reporting services that are used to review learner and/or package activity records.
Accepts records of SCORM activity tree sequencing status data model and individual activity status data model (IEEE 1484.11.1 data instances).
Manages storage and indexing of the data records (keyed to learner IDs, package IDs, and attempt number).
Manages aging and archiving of aged data (storage space management, hierarchical storage, etc.)
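The repository interface implied by the list above can be sketched as follows: accept attempt records keyed on (learner ID, package ID, attempt number) and support retrieval for reporting. The class name and in-memory storage are illustrative assumptions; a real repository would use a database.

```javascript
// Sketch of a historical tracking repository: records are keyed to
// learner ID, package ID and attempt number, and can be retrieved
// per learner for viewing or reporting. In-memory, for illustration.
class HistoricalTrackingRepository {
  constructor() { this.records = new Map(); }
  key(learnerId, packageId, attempt) {
    return learnerId + ":" + packageId + ":" + attempt;
  }
  accept(learnerId, packageId, attempt, data) {
    this.records.set(this.key(learnerId, packageId, attempt), data);
  }
  forLearner(learnerId) {
    const out = [];
    for (const [k, v] of this.records) {
      if (k.startsWith(learnerId + ":")) out.push(v);
    }
    return out;
  }
}
```

Aging and archiving would sit behind the same interface, migrating old keys to cheaper storage without changing how reports query the data.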

Learner administration
The scope and complexity of learner administration may vary considerably, depending on the level of integration with enterprise systems. The SCORM does not define any requirements for learner administration, but it is obviously a key component of a practical learning management system. At a minimum, learner administration provides services such as:
Provides a means to register users or relate to existing user data.
Provides a means for learners to self-register for learning activities that use SCORM packages, if allowed by enterprise policy.
Provides a means for managers of the learning process to assign learning activities that use SCORM packages to users and to set constraints on those learning activities, such as number of attempts allowed or deadlines.
Provides a means for learners and managers to view the status of assignments and review tracking data and summary results.
If groups are supported by the LMS, provides similar services for a group as for an individual user, as well as administration of group membership.

Objective tracking (a.k.a. Lightweight Competency Management)
This is a lightweight system that manages the status of objectives associated with a learner. SCORM 2004 has a concept of "system global objectives" for which the status is maintained across attempts and across SCORM 2004 packages. This system may be built into the Delivery Management System or be a standalone component, or a service provided by a real competency management system.
This system:
Stores objective status for users in the form of "competency records"
Provides the Delivery System with current objective status data, if available.
Receives updated objective status data in the form of "evidence records" that include the source of the data change (e.g. SCORM package, attempt number, timestamp), and updates the current competency records or propagates the information to "official" records according to administrative policy. For example, if policy dictates that an employee's competency records cannot be modified directly by a SCORM course without review or vetting, the status persisted for SCORM does not automatically propagate to the official competency records; instead, a change of status may trigger a workflow event requesting review and an update of the official competency record.
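The policy-gated update described above can be sketched as a single function: the SCORM-side status is always persisted with its provenance, while propagation to the official record depends on policy. The names here (`autoPropagate`, `reviewQueue`, the store shape) are illustrative assumptions.

```javascript
// Sketch of applying an evidence record under administrative policy.
// The SCORM-side objective status is always updated; the official
// competency record is updated only if policy allows, otherwise a
// review request is queued (standing in for a workflow event).
function applyEvidence(store, evidence, policy, reviewQueue) {
  // Always persist the SCORM-side status with its provenance.
  store.scormStatus[evidence.objectiveId] = {
    satisfied: evidence.satisfied,
    measure: evidence.measure,
    source: evidence.source,       // e.g. package ID + attempt number
    timestamp: evidence.timestamp
  };
  if (policy.autoPropagate) {
    store.officialStatus[evidence.objectiveId] = evidence.satisfied;
  } else {
    // Request human review instead of changing the official record.
    reviewQueue.push(evidence);
  }
}
```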

General administration
There are three major components of General Administration:
General glue
Authentication and Authorization
Reports

General glue
This is the "glue" system that a system integrator uses to tie together the different components and services of the LMS.

Authentication and Authorization
This may include user authentication and single sign-on management, role management (e.g. unrestricted administrator, learner, repository administrator, learning management administrator, etc.) and role associations (e.g. managers can only view reports on the employees they supervise). This system can be arbitrarily complex, and it is often a good candidate for integration with existing enterprise administrative management systems (LDAP, Active Directory, etc.). A basic implementation might provide a very minimal rights and authentication system with a very small set of roles, such as Administrator and Learner.
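The minimal two-role scheme mentioned above amounts to a small permission table and a check function. The role and action names here are illustrative assumptions; a real deployment would typically delegate this to LDAP, Active Directory, or enterprise single sign-on.

```javascript
// Sketch of a very minimal rights system with two roles, as a basic
// implementation might provide. Role and action names are examples.
const PERMISSIONS = {
  Administrator: ["view_reports", "manage_users", "launch_content"],
  Learner: ["launch_content"]
};

function isAuthorized(role, action) {
  // Unknown roles get no permissions.
  return (PERMISSIONS[role] || []).includes(action);
}
```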

Reports
Management wants reports. Reports are typically required to monitor whether the enterprise goals are being met by the LMS, and to identify problems that may occur in the system or with particular users of the system. Extracting and massaging usage, status and tracking data requires the design of reports, or at least of some generic query interfaces or guidelines.

Notifications
A practical system often includes some kind of notification mechanism, typically using email. The SCORM does not define any requirements for notifications. A well designed notification system removes the need for concerned users to log into the LMS to receive notices that concern them. Examples include periodic summaries for administrators or faculty; alerts sent to learners when a new course is offered, when a deadline approaches to register for a course, or when a deadline approaches to complete a course; confirmation of successful completion sent to a learner; and alerts sent to LMS administrators when abnormal conditions are detected, such as security alerts, storage limit alerts, or abnormal usage patterns.

Security and privacy
Beyond the authentication and authorization described above, a practical system typically includes security features such as encryption of sensitive data, logging of events with security implications, administrative oversight procedures, and user interfaces provided to learners and managers only through a secure SSL session in approved browsers. Most LMSs operate in a context where new requirements for privacy and auditability may be imposed by law.

User portal
This is the set of services and/or user interface components that allow a user to interact with the LMS. The LMS might have its own portal, or might be integrated into an enterprise knowledge portal or workflow system. For example, direct access and launching of SCORM packages might take place in an embedded performance support system, where the entire LMS appears only as a link to a tutorial embedded in the reference material that pops up when the user requests help on a task. In an enterprise portal, the LMS is typically embedded among other enterprise-specific offerings available to authorized enterprise users.