
How does GPS work? July 6, 2011

Posted by HubTechInsider in Definitions, Hardware, Microprocessors, Technology.
[Image: artist’s interpretation of a GPS satellite, via Wikipedia]

 

How GPS works

Well, dear reader, here we both are again: time for another Hub Tech Insider primer on one of the most fundamental technologies driving today’s revolutions in mobile commerce, location-based services and applications, smartphone features, and so much more.

You know that I’m talking about GPS, the Global Positioning System.

You also realize, if you have been to this blog before, that you are about to get a complete rundown on the nuts-and-bolts of GPS technology from its nascent conceptual underpinnings through today’s location-aware mobile applications.

But first, a history lesson (you should have been expecting that too from me, Dear Reader):

The challenge of navigation: a brief history

The magnetic compass arrived in Western Europe from China sometime in the 12th century. The compass was a seminal invention in the history of navigation, providing orientation; however, most travelers still depended on familiarity with the region through which they were traveling, using sight navigation perhaps supplemented with some rudimentary observation of the stars.

The magnetic compass, of course, could not determine a person’s position. The stars were the primary means of determining position. Devices such as the astrolabe, the quadrant and the sextant provided navigators with important new sources of navigational and positioning data, and they opened up new territories to exploration, as they enabled the traveler to easily determine latitude, the angular distance north or south of the Earth’s equator.

As an experienced sailor, and as a result of my naval military career, I have received training with the above three navigation instruments. Using a sextant to navigate can be fun, but the learning curve can be quite steep, particularly for beginners. There is also a major problem with using these instruments to determine a precise location: when navigating using the stars as a visual point of reference, there is no way to determine your longitude, the angular distance east or west around the globe.

Accurate latitude without accurate longitude led to many great naval disasters: the navigators of wooden sailing ships on the high seas, and their Captains, did not accurately know their own position. Easy and accurate measurement of longitude was so important to navigation at sea that in 1714, the government of Queen Anne’s Great Britain, then the world’s preeminent sea power, established a reward of 20,000 pounds sterling (equivalent to roughly 1.9 million pounds, or $2.8 million, in year-2000 money), to be paid to the first person or persons capable of developing a practical method of determining longitude at sea.

The first seaworthy, highly accurate chronometer was developed in 1761 by John Harrison (1693-1776), a carpenter by trade. The struggles and tribulations he suffered during his device’s decades-long gestation were chronicled in the excellent film “Longitude“, which I highly recommend.

Incidentally, the key innovation of Harrison’s seaworthy chronometers, which were later designated as models H1, H2 &  H3, was dubbed the “Grasshopper Escapement“.

In 2008, world-renowned physicist Stephen Hawking (yes, that’s right, the wheelchair-bound, Cambridge, England-residing, computer-voiced supergenius; we all know who he is) unveiled the Corpus Clock, a very unusual public timepiece[VIDEO] which utilizes an ‘upgraded’ type of grasshopper escapement. The clock is the work of horologist John Taylor (a horologist is a clock-maker, or someone who studies timekeeping; “horo” is Greek for “hour” or “time”). If you are like me, and love to fool around with electronics and electric circuits, you already know John Taylor as the inventor of the thermostatic switch, used in umpteen millions of household appliances.

Harrison’s chronometer also incorporated two other mechanical engineering advances: a gridiron pendulum, made of lengths of brass and iron arranged in such a way that the length of the pendulum from pivot to bob stays constant regardless of temperature, and rollers of lignum vitae (a self-lubricating wood) mounted on non-corroding brass spindles. In concert with the grasshopper escapement described above, these features helped to virtually eliminate friction from the Harrison device.

A British Captain leaving on a long sea voyage would set the Harrison chronometer to the exact same time kept in Greenwich, England. This is the origin of what is known as GMT, or Greenwich Mean Time. GMT is a way of universally telling time across the world. For example, the Eastern Standard time zone, within which Boston is located, can be referred to as “GMT-5”, which means the time in Boston will be five hours behind the time in Greenwich, England. While I’m on the subject of GMT, know that GMT is referred to as “Zulu” in the U.S. military, as in “United States Navy SEAL Team 6 will deploy to HVT#1 compound in Abbottabad, Pakistan at 0530 Zulu”.

On the high seas, the Captain or Navigator of the vessel would then be able to determine the local time by observing the position of the sun. The difference between the local time and the time in Greenwich, which was maintained accurately throughout the voyage by the Harrison chronometer, could then be used to derive the ship’s angular distance from Greenwich, which is its longitude (24 hours of time corresponds to 360 degrees of longitude, or 15 degrees per hour).
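The time-to-longitude arithmetic above is simple enough to sketch in a few lines of code (a modern illustration of the idea, of course; the function name is my own):

```python
# 24 hours of time = 360 degrees of longitude, so 15 degrees per hour.
def longitude_from_time(local_solar_hours, greenwich_hours):
    """Return degrees of longitude; negative values are west of Greenwich."""
    return (local_solar_hours - greenwich_hours) * 15.0

# Local noon observed while the chronometer reads 16:00 Greenwich time:
# the ship is 4 hours behind Greenwich, i.e. 60 degrees west.
print(longitude_from_time(12.0, 16.0))  # -60.0
```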

In 1772, Captain James Cook used a Harrison-styled chronometer to explore and accurately chart the Pacific Ocean for the British Admiralty (for more of the fascinating backstory on Cook’s voyage, I again recommend the film “Longitude”). The Harrison chronometer was a huge advance in navigation, but it only worked in fair weather, when the position of the sun in the sky could be observed to determine local time. This restriction was removed with the invention of radio.

Radio signal navigation

The first equipment to be used for radio navigation arrived in 1912, but it suffered from accuracy problems. Pulse radar, developed during World War II, made it possible to measure the short time differences between transmitted and received radio waves. This is the same principle used by police speed trap radars: the equipment sends out a radio pulse and measures the time it takes for the pulse to travel to a vehicle, bounce off it, and arrive back at the radar gun. The time difference tells the radar’s computer the car’s distance from the gun.

The GPS system uses a constellation of 24 active GPS satellites orbiting the Earth

The early radio navigation systems used this same principle of sending radio waves and measuring time differences. In many of the early systems, radio signals were sent from two towers, at exactly the same time, traveling at the same speed. The navigator’s radio receiver would then detect which of the two radio signals arrived at the navigator’s position first, and then would measure the amount of time that would elapse until the arrival of the second radio signal.

The navigator would be aware of the exact positions of the two signalling towers, the speed of the radio waves and the time difference between them when they arrived at the navigator’s position. If the radio waves had reached the navigator’s position at exactly the same time, expressed as Δt = 0, the navigator’s position would lie exactly between the two signalling towers. If instead the second radio signal arrived two time units before the first signal, then the navigator would know that their position would be closer to the second signalling tower than the first one.
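This one-dimensional scheme can be sketched in code (my own toy model, with invented numbers): place the two towers at positions 0 and d on a line, with both signals sent at the same instant.

```python
def position_on_baseline(d, c, delta_t):
    """Position of the receiver between towers at x=0 and x=d.
    delta_t = (arrival time of tower-0's signal) - (arrival time of tower-1's).
    Derivation: t0 = x/c and t1 = (d - x)/c, so delta_t = (2x - d)/c."""
    return (d + c * delta_t) / 2.0

c = 300_000.0  # radio waves travel at the speed of light, ~300,000 km/s

# Signals arrive simultaneously (delta_t = 0): receiver is exactly midway.
print(position_on_baseline(300.0, c, 0.0))      # 150.0 km

# Tower 0's signal arrives 0.5 ms earlier: the receiver is nearer tower 0.
print(position_on_baseline(300.0, c, -0.0005))  # 75.0 km
```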

Two radio signals give the position of the receiver on a line between two radio sources

Of course, this is only a one-dimensional position fix. A one-dimensional position fix is not very useful, but if three radio signalling towers are used, then the radio navigation system is capable of delivering a two-dimensional position fix. As in the previous example, the navigator’s receiver records which signal arrives at the receiver first and the time differences between the first signal and the others. Using this knowledge of the signalling towers’ positions, the speed of the three radio signals and the difference, or delta, in the arrival times of the signals, the receiver calculates a two-dimensional position.

Adding a third radio signal source allows a two-dimensional position fix to be calculated

 

GPS is radio navigation using satellites instead of signalling towers

GPS uses radio waves to determine position, just as in the early radio-based navigation systems like the ones described above, but with an important twist. Land based signalling towers are replaced by satellites orbiting 20,200 kilometers (12,552 miles) above Earth.

These satellites do not transmit radio pulses, however. Instead, the GPS satellites transmit a sequence of numbers that enables a GPS receiver to measure its distance from each satellite instead of its position between the satellites.

Alright, Dear Reader, you know this is the point in my discussion where I’m really going to start breaking it down for you. Remember, in GPS, as in many technological wonders of our modern age, God is in the details. Stay with me, as this technical discussion will reward your dedicated attention span to this article by giving you a more complete understanding of how your GPS receiver operates and solves for position.

I am going to simplify some of the details of the transmitted number sequences in order to provide to you an easily comprehensible example, no hate emails please. This is my disclaimer.

Starting at a known time, t0 in the example I am about to describe, the satellite broadcasts a number sequence. For the purposes of illustration, let us say that the satellite in question sends the number 10 at t0, the number 23 at time t1, and so on, and this satellite continues to send a different number each time segment, without repeating itself, for a millisecond.

GPS satellite sending and GPS receiver detecting a transmitted number sequence

The GPS receiver already has the exact same number sequence stored in its electronic memory and “knows” the exact time when the satellite began to transmit its number sequence. At time t0, the receiver starts at the beginning of the number list in its memory and advances one number for each time segment.

When the GPS receiver detects that number 10 has arrived from the satellite, it notes that it is at number 42 in its own list, which means it took seven time segments for the radio wave carrying the numbers to get from the satellite to the receiver. If the radio wave travels 3,219 km (2,000 miles) per time unit, the receiver knows the satellite is 22,531 kilometers (14,000 miles) away. This technique is known as ranging, and it requires exact time synchronization between the receiver and the satellites in addition to a known number sequence.
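In code, the ranging step from this toy example is just a multiplication (illustrative numbers from the text; a real GPS receiver works with fractions of a millisecond and the true speed of light):

```python
def range_to_satellite(segments_elapsed, km_per_segment=3219):
    """Distance = (number of elapsed time segments) x (distance per segment)."""
    return segments_elapsed * km_per_segment

# Seven segments at ~3,219 km each:
print(range_to_satellite(7))  # 22533 km, matching the rough figure above
```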

But, of course there is a problem with time, and to solve that problem, we need to use one of Einstein’s theories. And no, I’m not making this up.

How GPS bends time

The GPS system uses a constellation of 24 satellites that transmit this time-stamped information on where they are. By multiplying the elapsed time of reception by the speed of light, the GPS receiver can calculate its distance from each of the satellites it is currently receiving radio signals from.

Each GPS satellite is equipped with an atomic clock, the most accurate type of chronometer available.

For accuracy to within a few meters, the satellites’ atomic clocks have to be extremely precise – plus or minus 10 nanoseconds.

10 nanoseconds? I know, I know, Dear Reader: many of us, myself included, are aware of and can easily comprehend time divisions in the milliseconds. These types of chronological measurements are used in computer programming and applications. But nanoseconds are a much smaller unit used to divide time – and there’s a big problem besides the conceptual challenges associated with grasping such minute time divisions.

These amazingly accurate atomic clocks never seem to run quite right aboard these GPS satellites. One second as measured on each GPS satellite never quite matches a second as measured on Earth.

Wait a second: what the heck? Why not? I thought you said the atomic clocks were the most accurate form of chronometer available…why is it that there are these time differences?

Well, Dear Reader, the answer is that Einstein knew what he was talking about with that relativity stuff. Mind you, I’m not talking about Einstein’s much broader scientific theory of general relativity; I’m speaking of Einstein’s earlier special theory of relativity, whose predictions were later proven to be observable in the cosmos.

Einstein, relativity, and GPS

Albert Einstein’s special theory of relativity predicts that a clock that is traveling fast will appear to run slowly from the perspective of someone standing still. The GPS satellites move at around 9,000 miles per hour, and this is enough speed to make their onboard atomic chronometers slow down by 8 microseconds per day from the perspective of a GPS receiver.

This is more than enough to completely corrupt the location data. In order to counter this effect, the GPS receiver adjusts the time information it receives from each satellite by using an equation:

GPS receivers use the above equation to correct for the timing discrepancies that result from Einstein’s theory of special relativity

The amount of time that has elapsed on Earth during the delta time interval of the satellite’s radio transmission segment is equal to the amount of time elapsed as measured on the GPS satellite in question, divided by the square root of 1 minus the square of the satellite’s velocity (around 9,000 MPH) divided by the square of the speed of light (186,282 miles per second).
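As a sanity check on these figures, here is the time-dilation factor worked out numerically (my own back-of-the-envelope arithmetic, using an orbital speed of roughly 3.9 km/s; it lands within a microsecond or so of the figure quoted above):

```python
import math

c = 299_792.458   # speed of light, km/s
v = 3.874         # approximate GPS satellite orbital speed, km/s (~8,700 mph)

# Special relativity: one satellite second corresponds to gamma Earth seconds.
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Accumulated offset over one day (86,400 seconds):
offset = (gamma - 1.0) * 86_400
print(f"{offset * 1e6:.1f} microseconds per day")  # ~7.2
```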

Yowza!

How GPS uses triangulation to solve for position

Solving for position using GPS satellite radio signals (corrected for time as detailed above) is accomplished by means of triangulation (more precisely, trilateration, since it uses distances rather than angles), which means if you know your distance from three fixed locations, you can calculate your own position. I have illustrated in my prior, simplified examples how a navigator can find their position in two dimensions.

In two dimensions, a GPS receiver measures its distance from satellite #1, which means the navigator is somewhere on the conceptual circle of potential positions that surrounds GPS satellite #1. Next, the receiver measures its distance to GPS satellite #2. The GPS receiver must then lie somewhere on the circles of potential positions that surround satellites #1 and #2. There are only two potential positions where the GPS receiver can be located, and each of these two potential positions is where the two circles of position potentialities intersect.

Triangulation is used to find position from GPS satellite receptions

The GPS receiver then measures its distance from GPS satellite #3 and, just as before, the only potential positions for the GPS receiver are where the circles surrounding the three satellites intersect. Using triangulation, there is only one location on the Earth where all three position potentiality circles intersect, so at this point the GPS receiver has calculated its position.
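Here is a minimal two-dimensional version of that intersection calculation (all coordinates and ranges invented for illustration): subtracting the circle equations pairwise leaves two linear equations, which a small 2x2 solve handles.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position from distances r1, r2, r3 to known points p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 1's equation from circles 2 and 3 gives two
    # linear equations of the form a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver actually at (3, 4); ranges measured to three reference points:
p = trilaterate((0, 0), 5.0,
                (10, 0), math.hypot(7, 4),
                (0, 10), math.hypot(3, 6))
print(p)  # approximately (3.0, 4.0)
```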

Piece of cake, right? GPS navigation sounds complex because it is, but fortunately the GPS receiver equipment performs these calculations with great speed and accuracy, hiding all the nasty math it takes to solve for position from the navigator.

GPS solves for latitude, longitude, and altitude too

The GPS system is really super because it uses three intersecting spheres of position potentialities to determine a three-dimensional position consisting of latitude, longitude, and altitude.

My examples above stress the importance of time synchronization and the satellites’ exact positions. This will help me explain the concept of Selective Availability, or SA, later in this article. For now, just remember that if the receiver is not exactly synchronized to the satellites or if it does not know the satellites’ precise positions, the position the GPS receiver calculates will be inaccurate.

Signals from just three GPS satellites are enough for a GPS receiver to calculate its position, but a fourth GPS satellite signal is used to synchronize the time between the satellites’ highly accurate onboard atomic clocks and the less accurate quartz chronometer onboard the GPS receiver itself.

If radio signals from only three GPS satellites are available, one signal must be used to synchronize time, leaving only two signals to calculate a two-dimensional position.

The knowledge of the GPS satellites’ exact positions is the other vital aspect of positioning with GPS signals. A GPS receiver would not be able to accurately determine its own position with only the radio signal time difference information from the GPS satellites; it must also know the exact positions of the GPS satellites in order to determine its own location on the Earth. Each satellite knows its own position, as well as the positions of all of the other GPS satellites in the GPS satellite constellation, and each satellite sends this orbital position information down to the GPS receiver.

GPS satellite position information

As I explained above, the GPS receiver has a list of satellite positions that is transmitted to it from the GPS satellites. But what happens when the GPS receiver is turned off, or is moved while its power is off? How will it know where the satellites are when it is turned back on?

If a GPS receiver has been turned off for more than six months or has been moved more than 300 miles while it was turned off, then its internal almanac is inaccurate and cannot be used. Fortunately, all the GPS satellites transmit an updated orbital position almanac with regularity.

When a GPS receiver is turned on, it initially performs a check on its latest received orbital position almanac. If the GPS receiver determines that this almanac makes no sense according to a set of predefined parameters, then the GPS receiver will wait until it receives a new almanac so it can then calculate its position.

This delay between when a GPS receiver is turned on and when it calculates its position is called Time To First Fix (TTFF). Sometimes solving for the first position fix reading can take a while, and the reason behind this is usually that the GPS receiver is waiting for a new almanac from the GPS satellites.

So far in my discussion of the GPS system, I have spoken only of two entities: the satellites and the GPS receiver users. The third component of the GPS system is ground control. Ground stations monitor the satellite positions, control the satellites and determine the overall GPS system health.

Ground control also maintains the up-to-date orbital positioning almanac that is beamed to the GPS satellites, and in turn, down to the GPS receiver units on the Earth.

The United States military, Navstar, WAAS, the ionosphere and Selective Availability

How’s the above for a section title heading? A mouthful?

The concept of the GPS system was conceived of in 1960 to increase the accuracy of intercontinental ballistic missiles. Just another example of your tax dollars at work for you Americans out there, and another example of military space technology come down to Earth in the form of some civilian technology innovations.

The U.S. Air Force began the development of the GPS system and called it the Global Positioning System. Soon afterwards, the other branches of the U.S. military became involved in the development of the GPS system and the Pentagon changed the system’s name to Navstar, a name that did not stick. The entire system cost nearly $10 billion to develop and was fully operational in April 1995.

Eighteen satellites is the minimum number needed to cover the entire Earth, but the actual number of GPS satellites that make up the GPS constellation in orbit fluctuates between 24 and 29 due to factors such as maintenance of spare satellites and upgrading of GPS satellites.

The GPS system designers had to deal with an interesting problem that affected the accuracy of the radio waves being transmitted through the Earth’s ionosphere and down to the GPS receivers on Earth. The Earth’s ionosphere slows down the satellite radio waves and would potentially affect the accuracy of the position data determined by the GPS receivers.

The GPS system designers used two techniques to overcome the error in the GPS radio signals introduced by the ionosphere, one for civilian GPS receivers, and one for military receivers.

You see, the GPS system was designed by the U.S. military, and there was a very valid concern that the system could be used by America’s enemies as well as the U.S. military. After all, any country or group could potentially receive the radio signals beamed down to the Earth by the GPS satellites. As these radio waves are simply radio broadcasts, there is no point-to-point secure radio transmission to ensure who is using the GPS system and for what purpose they are using it.

GPS satellites beam two types of radio signals down to the GPS receivers: the Precision code, or P code, and the Coarse Acquisition code, or CA code. The CA code was public, and any receiver could detect it. The P code was made so complex that only military receivers, known as authorized users, can detect and use it for navigation.

The CA code is transmitted at 1575.42 MHz, which is called the L1 frequency. Each civilian GPS receiver is programmed with a model that reports how much the L1 signal slows down when it hits the ionosphere. Based on this model, the GPS receiver can correct for ionospheric interference.

The solution for military receivers is more complex, but this complexity brings with it more accuracy. The P code is transmitted on the L1 frequency and also at 1227.6 MHz, which is called L2. Radio waves of different frequencies slow down differently when they hit the Earth’s ionosphere, so military GPS receivers compare the delay between L1 and L2 to figure out how much each signal slowed down. Comparing two signals is more accurate than using an ionosphere model, because the model may be slightly off for any given GPS receiver location, whereas the comparison of the L1 and L2 radio signals measures the actual delay directly.
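The dual-frequency trick can be sketched numerically (pseudorange numbers invented; this is the standard "ionosphere-free combination", which cancels the first-order delay because that delay scales as 1/f²):

```python
F_L1 = 1575.42e6  # L1 carrier frequency, Hz
F_L2 = 1227.60e6  # L2 carrier frequency, Hz

def iono_free_range(p_l1, p_l2):
    """Combine L1 and L2 pseudoranges to cancel the ionospheric delay."""
    g1, g2 = F_L1**2, F_L2**2
    return (g1 * p_l1 - g2 * p_l2) / (g1 - g2)

# True range 20,000 km plus a frequency-dependent ionospheric delay:
true_range = 20_000_000.0                 # meters
iono_l1 = 5.0                             # meters of delay at L1 (assumed)
iono_l2 = iono_l1 * (F_L1 / F_L2) ** 2    # larger delay at the lower frequency
corrected = iono_free_range(true_range + iono_l1, true_range + iono_l2)
print(corrected)  # ~20000000.0 -- the ionospheric term cancels
```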

WAAS, or Wide Area Augmentation System, correction data decreases the impact of an inaccurate ionosphere model in civilian GPS receivers.

Selective Availability, WAAS, and Differential GPS

P code is inherently more accurate than the CA code transmitted by the GPS satellites, so military GPS receivers are generally accurate to 1 meter, or 3.3 feet. The CA code provides accuracy to about 15 meters, or 49.2 feet, which is less accurate than the P code, but still accurate enough to be deadly in the hands of the wrong people, so the GPS system designers decided to limit the usefulness of the CA code by making it deliberately less accurate than it was designed to be. The policy of deteriorating the CA code accuracy is called Selective Availability (SA).

Selective Availability randomly introduced position error into the CA code. The deliberate error changed the accuracy of a civilian (unauthorized) receiver from 15 meters (49.2 feet) to somewhere between 15 meters (49.2 feet) and 100 meters (328 feet). Selective Availability was a nuisance and as GPS use spread, many people complained. As a result, on May 2, 2000 the U.S. military eliminated SA at the behest of the U.S. government. Civilian GPS receivers are now nominally accurate to 15 meters, or 49.2 feet, as a result of this SA elimination.

While SA was still being enforced, some clever people developed a way around it using a technique known as Differential GPS. DGPS detects and eliminates the random error of SA and makes civilian receivers accurate to approximately 5 meters (16.4 feet). Since the removal of SA, DGPS is still used because it still increases civilian receiver accuracy, but is quickly being replaced by WAAS.

The FAA, or Federal Aviation Administration, in the United States determined that GPS alone is not accurate enough to be used for aviation, so it has added a form of differential GPS called WAAS to increase accuracy.

WAAS is more accurate than current DGPS services, it is available for locations in the U.S. and parts of Canada and Mexico, and it is free. GPS receivers equipped with WAAS have increased accuracy, from 15 meters (49.2 feet) to 3 meters (9.8 feet). Unfortunately, the WAAS correction signals are most easily received in flat, open spaces, so you may not be able to pick them up in mountainous terrain.

Still more to come from the Hub Tech Insider on GPS – Build Your Own GPS

In upcoming articles, I will delve much deeper into DGPS, or Differential GPS, and illustrate for you some of the very innovative ways in which DGPS has been employed in applications such as coastal sea or littoral navigation, and even how it is used to keep track of the maintenance needs of huge public works such as dams and bridges.

I am also really excited about an upcoming feature I am hard at work on, a more hands-on article in which I will demonstrate how you can build your very own, battery-powered GPS receiver. I will then take you through some very basic computer programming code, written in a language called Processing, also known as Wiring, which is very similar to the C computer language. (C is one of my favorite computer programming languages, I must admit. Any computer programming language that utilizes a C-type syntax, I enjoy working with – except Java. No hate mail.)

The built-from-scratch GPS unit I will demonstrate how to build is portable, battery powered, and includes a Bluetooth module for communicating wirelessly with your PC. You will be able, once you have followed my complete and detailed instructions for constructing this GPS device, to poll the GPS module for data, known as sentences in GPS parlance, to extract location data in the NMEA protocol. NMEA stands for National Marine Electronics Association, and the NMEA protocol is fundamental to digital, programmatic interactions with GPS modules.
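To give you a taste of what those NMEA sentences look like, here is a small parsing sketch (the sample GGA sentence is a standard textbook example, not output from my device; field positions follow the $GPGGA layout, and checksum validation is skipped for brevity):

```python
def parse_gga(sentence):
    """Extract decimal-degree latitude/longitude from a $GPGGA sentence.
    NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm."""
    fields = sentence.split(",")
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(round(lat, 4), round(lon, 4))  # 48.1173 11.5167
```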

As you can well imagine, I intend to break it all down as usual, and I think it will be fun to finally post some of my terrible computer programming code. As you know, I am not a professional programmer, but a PMO Director, but for some strange reason, ever since I was a little, tiny tiny nerd boy, I have been programming computers. I just don’t talk about it too much, because programming computers is just something I have always done, not something I hang out my shingle on, so to speak.

Anyways, don’t worry, just like the code I write for ecommerce sites, this GPS code is simple, easy to follow, and always, always runs. When you see the latitude and longitude information flowing from this device wirelessly onto your PC, you will be very happy. There is nothing like getting involved hands-on with technology to increase your understanding of difficult tech concepts, and I hope you will have as much fun building your own wireless bluetooth GPS module as I did.

Homemade, battery-powered GPS unit with Bluetooth module on a solderless breadboard

I will also show you how to run this GPS code on multiple platforms, such as Linux and Mac OS X, not just Windows PC machines. I always like to get my code running on all three platforms if possible – I will demonstrate how to get your development environment running for Processing coding on all three types of personal computer systems.

You probably already know that a complete glossary of GPS technology terminology is in the works too here at the Hub Tech Insider.

Want to know more?

You’re reading Boston’s Hub Tech Insider, a blog stuffed with years of articles about Boston technology startups and venture capital-backed companies, software development, Agile project management, managing software teams, designing web-based business applications, running successful software development projects, ecommerce and telecommunications.

About the author.

I’m Paul Seibert, Editor of Boston’s Hub Tech Insider, a Boston focused technology blog. You can connect with me on LinkedIn, follow me on Twitter, even friend me on Facebook if you’re cool. I own and am trying to sell a dual-zoned, residential & commercial Office Building in Natick, MA. I have a background in entrepreneurship, ecommerce, telecommunications and software development, I’m a PMO Director, I’m a serial entrepreneur and the co-founder of  Tshirtnow.net.

What is EDIINT? What is AS2, and how does it differ from AS3 or AS4? November 2, 2010

Posted by HubTechInsider in Definitions, Manufacturing, Supply Chain Management.
[Image: Internet map, via Wikipedia]

EDI, or Electronic Data Interchange, is a format used by large enterprises for exchanging digital information about purchase orders, invoices, and other business supply chain related information with other companies, businesses, and enterprises.


EDIINT stands for EDI over INTernet.


One of the concerns and needs of the large business enterprises using EDI for electronic transactions throughout the 1990s was the burgeoning requirement to be able to exchange EDI-formatted data streams securely over the public Internet. Toward the late 1990s, the first EDIINT technology, a secure digital transmission conduit over the public Internet called AS1, was standardized and released by the Internet standards bodies.


The AS1 protocol leveraged SMTP (the standard Simple Mail Transfer Protocol, or Internet email) as the foundation for exchanging communications. During this early phase of EDIINT deployments and AS1 protocol adoption, several software vendors emerged, offering to eliminate the de rigueur (for the time) VAN (Value-Added Network) fees that were commonly levied against large enterprises by the VANs then in existence. The development of the AS1 protocol, which allowed transfer of EDI messages and transactions securely over the public Internet, should have enabled these large enterprises to use AS1 to connect point-to-point with each other securely over the public Internet without need of VANs or their fee structures.


But although the ideal of AS1 was certainly promising, the promised elimination of VAN network access fees never really materialized, and the AS1 protocol unfortunately did not see widespread adoption and acceptance by larger enterprises’ IT organizations. Several common reasons were behind this shunning of AS1 by corporate IT departments. One reason was the fear among larger enterprises that moving away from the liability indemnification of the VAN networks to transmissions over the (albeit secured) public Internet using AS1 was not quite ready for wholesale adoption in mission-critical transaction environments. Another reason was that some corporate IT departments were fearful, with considerable justification, of overloading enterprise email servers with EDI traffic as a result of the AS1 protocol’s dependence upon secured SMTP packets, which would route through corporate Microsoft Exchange or other SMTP email servers. In addition, SMTP email did not incorporate enough feature robustness to ensure the real-time delivery of SMTP email and, more critically, enforce the non-repudiation features of the EDI standards then in common use.


The next incarnation of EDIINT emerged in 2001, with the new AS2 protocol superseding the earlier AS1. AS2 was designed from the start to address the same needs and requirements as the earlier AS1 protocol, but with the major distinction that AS2 was based upon the HTTP protocol instead of AS1’s reliance on the SMTP protocol. AS2’s use of HTTP instead of SMTP provided a more direct and realtime connection for transmitting EDI data between companies. The use of HTTP, combined with the growing acceptance of the Internet as a serious venue for international commerce, gave AS2 a much stronger foundation upon deployment, and AS2 gained a foothold in corporate IT departments, in terms of adoption and implementation, that AS1 had never enjoyed. But although interest in AS2 was greater than it had been for AS1, AS2 still did not reach mainstream wholesale adoption among large corporate enterprises.
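At the wire level, an AS2 transmission is essentially an HTTP POST with a handful of AS2-specific headers wrapped around the EDI payload. Here is a rough, hypothetical sketch (the endpoint URL, trading-partner identifiers, and payload are all invented; a real AS2 implementation also involves S/MIME signing, encryption, and MDN receipt handling, all omitted here):

```python
import urllib.request

# Truncated, illustrative EDI X12 interchange (not a valid document):
edi_payload = b"ISA*00*..."

req = urllib.request.Request(
    "https://partner.example.com/as2",  # hypothetical partner endpoint
    data=edi_payload,
    headers={
        "AS2-From": "MYCOMPANY",        # sender's AS2 identifier (invented)
        "AS2-To": "PARTNER",            # recipient's AS2 identifier (invented)
        "Content-Type": "application/edi-x12",
        "Message-ID": "<20020101-0001@example.com>",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the POST; not executed here,
# since the endpoint is fictional.
print(req.get_method(), req.full_url)
```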


Walmart and the adoption of AS2


The lack of enthusiasm at the corporate level for AS1 and AS2 adoption largely came about because of the lack of a “Market Maker”, a powerful intermediary enforcing adoption and deployment of AS2 for EDIINT. Two companies had to decide together to use a protocol such as AS1 or AS2, as either protocol necessitates coordination on both ends. This meant that although an enterprise might decide to work with a significant partner or primary systems integrator to deploy AS2, for most of that enterprise’s supplier, customer and vendor business relationships, the payoff would hardly be worth the effort.

All of this changed overnight in 2002 when Walmart announced that their entire EDI transactions and transmissions program would be moving over to the AS2 protocol and that *all* of their suppliers were expected – required absolutely, in typical Walmart fashion – to follow suit. Walmart’s decision was the tipping point for AS2’s widespread adoption and deployment across many industries and enterprises of various scale. Walmart’s reputation as a supply chain industry thought leader, as well as their renowned strong-arm tactics with their suppliers and vendors, forced other large enterprises to follow their lead. Walmart’s diktat led to positive feedback loops and other network effects: as a large number of Walmart suppliers became fully AS2-enabled, a growing ecosystem of AS2-enabled vendors and suppliers emerged in the marketplace. Thus it became even easier for recalcitrant suppliers to justify jumping into the EDIINT AS2 pool. AS2-enabled suppliers could easily extend their transactional AS2-based EDIINT systems into a vibrant community of AS2-enabled enterprises. As a result, by 2003 AS2 had become one of the most popular protocols for EDI transmissions within North America.


Europe and the Odette File Transfer Protocol V2, or OFTP V2


Despite the rapid spread of AS2 in the United States, Canada and Mexico, however, AS2 adoption lagged in Europe. The major reason for the discrepancy in AS2 adoption rates between North America and Europe was the lack of a European market maker à la Walmart in the United States. Without a key champion like Walmart driving rapid adoption, AS2 usage has taken much longer to spread into Europe’s major enterprises.


Into this vacuum, a new standard has emerged in Europe which may supplant the adoption of AS2 entirely if enough enterprises of scale in Europe decide to adopt it. The standard is Version 2 of the Odette File Transfer Protocol, or OFTP V2, and it is very similar to AS2 in that it leverages both the public Internet and HTTP for connectivity. In Europe, large automotive enterprises such as Volkswagen, Volvo and PSA are driving the adoption of OFTP V2 in an industry-wide effort to reduce costly VAN networking fees. This wave of automotive suppliers supporting OFTP V2 should follow a similar pattern, although perhaps on not quite as large a scale, to the adoption of AS2 in North America by retail suppliers and vendors in response to Walmart’s urgings and data integration requirements.


Future EDIINT Standards: AS3 and AS4 and SOA


Future standards likely to emerge within the next iterations of EDIINT include AS3, which is based upon FTP, and AS4, which is based upon web services. Each of these newer variants offers benefits not available to users of AS2: for instance, AS3 does not require an ‘always on’ connection and could potentially handle large files better than AS2, while AS4 can integrate with SOA (Service-Oriented Architecture) software infrastructures with relative ease, something that is prohibitively difficult at present with AS2. Despite these technological advances, a large enterprise trying to determine which protocol is most appropriate for its EDI transmissions is likely to choose AS2 despite its limitations, simply because of the large community of companies already using AS2, rather than trying to forge an uncertain path trailblazing the use of AS3 or AS4 in the absence of a market maker as mentioned above.


So until another market maker emerges to drive the adoption of AS3 or AS4 as Walmart did with AS2, AS2 will continue to be the de facto standard for EDI transmissions over the Internet. Instead of companies and large enterprises across different industries moving to AS3 or AS4, AS2 is instead adopting features that address the benefits available in those other standards. For example, a recently announced effort is under way to add “Restart” capability to AS2, which would provide some of the better support for large file transfers that we have seen in AS3.


Want to know more?

You’re reading Boston’s Hub Tech Insider, a blog stuffed with years of articles about Boston technology startups and venture capital-backed companies, software development, Agile project management, managing software teams, designing web-based business applications, running successful software development projects, ecommerce and telecommunications.

About the author.

I’m Paul Seibert, Editor of Boston’s Hub Tech Insider, a Boston-focused technology blog. You can connect with me on LinkedIn, follow me on Twitter, even friend me on Facebook if you’re cool. I own and am trying to sell a dual-zoned, residential & commercial Office Building in Natick, MA. I have a background in entrepreneurship, ecommerce, telecommunications and software development; I’m the Director, Technical Projects at eSpendWise; and I’m a serial entrepreneur and the co-founder of Tshirtnow.net.

What is a User Story? How are they used in Requirements Gathering and in writing User Acceptance Tests? October 3, 2010

Posted by HubTechInsider in Agile Software Development, Definitions, Project Management.
Tags: , , , , , , , , , , , , , , , , ,
add a comment
user stories image

Image via Wikimedia

What is a User Story? How are they used in Requirements Gathering and in writing User Acceptance Tests?

User Stories are short conversational texts that are used for initial requirements discovery and project planning. User stories are widely used in conjunction with agile software development project management methodologies for Release Planning and definition of User Acceptance Criteria for software development projects.

User Goals, stated in the form of User Stories, are more closely aligned with Business Priorities than software development Tasks and so it is the User Story format which prevails in written statements of User Acceptance Criteria.

An Agile Project Team is typically oriented toward completing and delivering User-valued Features rather than completing isolated development Tasks (these development Tasks eventually combine into a User-valued Feature).

User Goals are not the same things as software development Tasks. A User Goal is an end condition, whereas a development Task is an intermediate process needed to achieve this User Goal. To help illustrate this point, here are two example scenarios:

1. If my User Goal is to laze in my hammock reading the Sunday Boston Globe newspaper, I first have to mow the lawn. My Task is mowing; My Goal is resting. If I was able to recruit someone else to mow the lawn, I could achieve my Goal without having to do the mowing, the Task.

2. Tasks change as implementation technology or development approaches change, but Goals have the pleasant property of remaining stable on software development projects. For example, if I am a hypothetical User traveling from Boston to San Francisco, my User Goals for the trip might include Speed, Comfort and Safety. Heading for California on this proposed trip in 1850, I would have made the journey in a high technology Conestoga wagon for Speed and Comfort, and I would have brought along a Winchester rifle for Safety. However, making the same trip in 2010, with the same User Goals, I would now make the journey in a new Boeing 777 for updated Speed and Comfort and for Safety’s sake I would now leave the Winchester rifle at home.

· My User Goals remained unchanged, however the Tasks have changed so much that they are now seemingly in direct opposition. User Goals are steady, software development Tasks as stated on SOWs (Statements Of Work) are transient.

· Designing User Acceptance Criteria around software development Tasks rarely suits, but User Acceptance Criteria based on User Goals always does.

A User Story is a brief description of functionality as viewed by a User or Customer of the System. User Stories are free-form, and there is no mandatory syntax. However, it can be useful to think of a User Story as generally fitting this form:

“As a <type of User>, I want <Capability> so that <Business Value>”.

Using this template as an example, we might have a User Story like this one:

“As a Store Manager, I want to search for a Service Ticket by Store so that I can find the right Service Ticket quickly”.

User stories form the basis of User Acceptance Testing. Acceptance tests can be created to verify that the User Story has been correctly implemented.
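To sketch how a User Story drives an acceptance test, here is a minimal example in Python. The ticket data and the `search_tickets_by_store` function are hypothetical stand-ins for whatever system would implement the Store Manager story above; the point is that the test asserts the User Goal, not any development Task.

```python
# A hypothetical acceptance test for the User Story:
# "As a Store Manager, I want to search for a Service Ticket by Store
#  so that I can find the right Service Ticket quickly."
# The data model and search function are illustrative, not a real system.

TICKETS = [
    {"id": 101, "store": "Boston", "issue": "Broken freezer"},
    {"id": 102, "store": "Natick", "issue": "Leaky faucet"},
    {"id": 103, "store": "Boston", "issue": "Flickering lights"},
]

def search_tickets_by_store(tickets, store):
    """Return all Service Tickets filed against the given Store."""
    return [t for t in tickets if t["store"] == store]

def test_store_manager_can_search_by_store():
    # Acceptance criterion: searching by Store returns only that Store's tickets.
    results = search_tickets_by_store(TICKETS, "Boston")
    assert [t["id"] for t in results] == [101, 103]
    # Acceptance criterion: a Store with no tickets yields an empty result.
    assert search_tickets_by_store(TICKETS, "Worcester") == []

test_store_manager_can_search_by_store()
```

Notice the test would survive a complete reimplementation of the search (database, web service, or in-memory list), just as the User Goal survives changes in development Tasks.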



What’s the difference between a Graphic Designer, an Information Architect and an Interaction Designer? September 15, 2010

Posted by HubTechInsider in Agile Software Development, Definitions, Ecommerce, Mobile Software Applications, Project Management, Social Media, Software, VoIP, VUI Voice User Interface, Wireless Applications.
Tags: , , , , , , , , , , , , , , , ,
add a comment

Information Architecture is the study of the organization and structure of effective web systems. Information architects study and design the relationships between internal page elements, as well as the relationships and navigation paths between individual pages. They combine Web design, information and library science as well as technical skills to order enterprise knowledge and design organizational systems within websites that help Users find and manage information more successfully. They are also responsible for things like ordering tabs and content sections of a web-based software application.  They try to structure content and access to functions in such a way as to facilitate Users finding paths to knowledge and the swift accomplishment of their User Goals with the System.

Graphic Design is the skill of creating presentations of content (usually hypertext or hypermedia) that are delivered to Users through the World Wide Web, by way of a Web browser or other Web-enabled software like Internet television clients, micro blogging clients and RSS readers. Graphic designers study and design graphic elements, logos, artwork, stock photography, typography, font selection, color selection, color palettes and CSS styles.


Interaction Design is the process of creating an interface for the user to engage with a site or application’s functionality and content. Interaction designers are concerned mainly with facilitating users’ goals and tasks, and use a systematic and iterative process for designing highly interactive user interfaces. Their methodology includes research and discovery techniques such as requirements analysis, stakeholder analysis, task analysis, as well as prototyping, inspection and evaluation methods to define the structure and behavior of a web-based software system.


What’s the difference between Design and User Experience?

  • Design is about changing understanding; user experience is about changing behavior.
  • Design is about intent; user experience is about purpose.
  • Design is about style; user experience is about substance.
  • Design is about the platform; user experience is about the person.
  • Design is about the present; user experience is about the past and future.
  • Design is about action; user experience is about impact.

The Hub Tech Insider Glossary of Mobile Web Terminology August 21, 2010

Posted by HubTechInsider in Definitions, Mobile Software Applications, Wireless Applications.
Tags: , , , , ,
1 comment so far
Image representing iPhone as depicted in Crunc...

Image via CrunchBase

Well, as all of my regular readers know, and most casual readers of these pages can probably easily surmise, I am an ecommerce guy.

I have been designing, programming, managing, and just about everything-ing, ecommerce sites and companies for well over 15 years at this point.

I started my first ecommerce site in 1994. My first web site was an ecommerce site, the third web site in the US state in which I was living at the time. So building online stores is something I am super passionate about.

Sometime ago, probably around 2003 or 2004, I became convinced of the inevitability of the mobile web, and mobile web browsing for ecommerce sites.

I never really believed that the mobile browsing and online purchasing experience, or typical use case, for mobile browsing would be the same as the browsing experience on the desktop PC-based web. It just seemed to me that the mobile version of an ecommerce (or any other content-serving web site, for that matter) site would have to be optimized for a person on-the-go.

The appearance of the Apple iPhone really got me fired up about the mobile web, because I saw Apple driving mobile browsing to the fore of the public’s attention. There were several other factors that were, to my mind, inevitably driving the adoption of mobile web browsing.

So I set out to learn everything I could about mobile browsing, browsers, devices, standards, everything about mobile ecommerce and mobile web design.

At this point (summer 2010), I have set up several mobile versions of ecommerce sites. The mobile version of one of  my latest ecommerce projects, tshirtnow.net, is currently responsible for around 9% of that site’s orders, which I find amazing. I expect this number to grow over time.

My employer, eSpendWise, (I am Director of Technical Projects there) is in the midst of developing a very thoughtful mobile portal into the eSpendWise ecommerce and eProcurement platform used by many Fortune 100 companies, like Apple, Inc., Nike, and others. Optimizing the mobile portal for the nomadic browsing experience (picture a store manager approving a shipment of cleaning supplies on their smartphone while running to help a cashier) while still preserving the power and flexibility of the eSpendWise platform, as you might well be able to imagine, dear reader, is a challenging task to say the least.

A recent study by mobile commerce analysts at Morgan Stanley projected that within five years, the number of users accessing the Net from mobile devices will surpass the number who access it from PCs.

Because the screens are smaller, such mobile traffic is trending to be driven in the future by specialty software, mostly apps, designed for a single purpose. For the sake of the optimized experience on mobile devices, many users will forgo the general purpose browser for specialized mobile applications. Users want the Net on their mobile devices, but not necessarily the Web. Fast and easy (specialized purpose-built mobile applications) may eventually win out over flexible (the current desktop browser-oriented world wide web).

One thing I recommend is designing to web standards for your mobile applications or portals. In this way, you have the best shot at “future proofing” your mobile optimized content and applications.

During the writing of Functional Specifications for some of the mobile projects I have been involved with or responsible for, I have created a Glossary of mobile web terms and terminology I wanted to share with my HubTechInsider.com readers so that it may serve as a reference for their own mobile web design efforts.

Please don’t hesitate to send me an email with any questions, additions or corrections you may have for me, and please send me a short note with links / information about your own mobile web design efforts!

The Hub Tech Insider Glossary of Mobile Web Terminology

3G – 3G stands for Third Generation and refers to the latest phase in mobile technology. 3G enables much faster connections to the Internet so that you can get richer multimedia experiences such as video messaging.

4G – 4G stands for Fourth Generation and is a somewhat vague term used to describe wireless mobile radio technologies that offer faster data rates than current 3G (third generation) technologies. 4G networks are also more data-centric and based on standard Internet technologies such as IP. Voice service is typically provided using a special form of VoIP. WiMAX and LTE are examples of 4G technologies.

A-GPS – Assisted Global Positioning System. This is a mobile-based location technology. The mobile uses A-GPS to work out its location with the help of both GPS satellites and local network base stations.

AFLT (Advanced Forward Link Trilateration) – AFLT is a mobile-based location technology. AFLT does not employ GPS satellites to work out locations. Instead, the phone measures signals from nearby cellular base stations and reports the time/distance readings back to the network, which is then able to work out your location.

BROWSER – Software that allows you to view Internet content on a web-enabled device.

cHTML, C-HTML, Compact HTML – cHTML is a subset of HTML for i-mode browsers. cHTML is used only in Japan. cHTML is considered technically superior to WML. cHTML was replaced at W3C by XHTML Basic.

CTI (Computer Telephony Integration) – CTI is an optional set of applications that integrate your business’ telephone system with a computer.  Features can include video conferencing, one-click dialing, incoming call routing, and a variety of other timesaving features that could be appealing to large businesses.

EDGE (Enhanced Data Rates for GSM Evolution) – This is an enhanced modulation technique which increases network capacity and data rates in GSM networks.

FEATURE PHONE – A cell phone with lightweight web features; not a smartphone.

GSM (Global System for Mobile) – This is the digital network that mobile phones have used to make calls and send text messages, as well as the standard network available across much of the world. The data connection to the mobile internet is a phone call (similar to a fixed line modem) and it is billed relative to the duration of the call.

HDML (Hyper Device Markup Language) – Computer language format used to create wireless websites. HDML is the oldest markup language for display on mobile devices (circa 1996). HDML has a very simple syntax. HDML was never standardized, but was influential in the development of WML. No longer used on mobile phones in North America and Europe.

iDEN – a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time division multiple access (TDMA). iDEN is an enhanced specialized mobile radio network technology that combines two-way radio, telephone, text messaging and data transmission into one network.

i-mode – NTT DoCoMo’s proprietary wireless Internet service. Provides mobile devices access to web, e-mail and packet data. NTT DoCoMo’s i-mode is available only in Japan.

IMEI (International Mobile Equipment Identifier) – This is a 15-digit number which identifies an individual phone to the network operators.

Java (J2ME: Java 2 Micro Edition) – Java or J2ME (Java 2 Micro Edition) enables users to download tailor-made software applications onto their phones e.g. mobile games.

LTE (Long-Term Evolution) – An effort to develop advanced wireless mobile radio technology that will succeed current 3G WCDMA/HSDPA/HSUPA technology. Although “LTE” is not the name of the standard itself, it is often used that way. The actual standard is called 3GPP Release 8. LTE is considered by many to be a “4G” technology, both because it is faster than 3G, and because it uses an “all-IP” architecture where everything (including voice) is handled as data, similar to the Internet.

MMS (Multimedia Messaging Service) – Also referred to as picture messaging, MMS works much like text messaging but with a greater capacity so you can send larger quantities of text as well as attaching images and audio files from your phone.

NATIVE APPLICATION – Mobile phone software compiled into a compatible binary format, stored in phone memory and run locally on the device, e.g. a web browser, email reader, or phone book.

PORTAL – A website accessed by desktop or wireless device that provides a wide selection of information from a single place.

PREDICTIVE TEXT (T9: Text on Nine Keys) – Predictive text allows you to enter text by pressing only one key per letter. When you try and text in a word, the phone will automatically compare all of the possible letter combinations against its own dictionary and predict which word you intended to type.
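The dictionary-lookup idea behind T9 can be sketched in a few lines of Python. The tiny word list below is illustrative only; a real phone ships a much larger dictionary ranked by word frequency so it can offer the most likely match first.

```python
# A minimal sketch of T9-style predictive text: one key press per letter,
# then a dictionary lookup of every word matching that key sequence.

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
# Invert the keypad: each letter maps to the single key you press for it.
LETTER_TO_KEY = {ch: key for key, letters in KEYPAD.items() for ch in letters}

DICTIONARY = ["cat", "act", "bat", "dog", "good", "home", "gone"]

def keys_for_word(word):
    """The digit sequence a user presses to type the word, one key per letter."""
    return "".join(LETTER_TO_KEY[ch] for ch in word.lower())

def predict(key_sequence, dictionary=DICTIONARY):
    """All dictionary words whose key sequence matches what was pressed."""
    return [w for w in dictionary if keys_for_word(w) == key_sequence]

# "228" matches "cat", "act" and "bat" -- the phone must let the user
# cycle through candidates to disambiguate.
print(predict("228"))
```

This also shows why T9 sometimes offers the “wrong” word: distinct words can share the same key sequence (here, `4663` matches “good”, “home” and “gone”), so frequency ranking and user choice settle the tie.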

ROAMING – Making or receiving calls (or using wireless data services) outside your home airtime rate area. Additional fees may apply, depending on your calling plan.

SERIES 60 / SERIES 40 – Series 60 is based on the Symbian Operating System and is a major platform for smartphones. Series 60 was developed by Nokia for their own smartphones but they also license the platform to other mobile manufacturers. Series 60 mobiles tend to have a large color display and a large amount of memory for storing content. Series 40 phones tend to have smaller screens and less memory.

SIM CARD – This is the small card that slots into the back of a mobile phone underneath the battery. The SIM card controls your phone number and the Network that it works on.

SMARTPHONE – A smartphone is like a combination of a standard mobile phone and a PDA. Smartphones have their own complete Operating Systems but differ from PDAs in that they have a standard phone keyboard for input instead of a touch screen and pen.

SMS – (Short Message Service) Send or receive messages (up to 160 characters each) using your wireless device.  SMS is also known as “Text Messaging”.

SOFT KEYS – Soft keys can be used for many different functions according to what is displayed on your mobile at any one moment e.g. ‘Select’ and ‘Exit’. They are commonly found right under the display.

SYMBIAN – Symbian is a mobile operating system created by a group of companies (Nokia, Ericsson, Motorola, and Psion) for mobiles and personal digital assistants (PDAs).

SYNCHRONIZED ACCESS – Some companies create a scaled-down version of their website for PDAs. A copy of the site is stored on the PDA and updated each time it is placed in its cradle and synchronized.

TEXT MESSAGING – Send/receive messages (up to 160 characters each) from your wireless device. Text Messaging is also known as “SMS.”

TRI-BAND – A GSM mobile that supports three of the four major GSM frequency bands; there are two major variants (European and Americas). This type of mobile functions in most parts of the world.

U-TDOA (Uplink Time Difference of Arrival) – U-TDOA is a position-location technology for mobile phone networks. It works out your location using triangulation techniques, i.e. by comparing the arrival times of your phone’s signal at several base stations whose positions are known.
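The geometry behind network-based positioning can be sketched with a simplification: U-TDOA in practice works from *differences* in arrival times, but if we assume the network already knows the straight-line distance from the phone to three base stations at known coordinates, the 2D position falls out of basic trilateration. The station layout and distances below are invented for illustration.

```python
# Simplified trilateration sketch: recover a phone's (x, y) position from
# known distances d1, d2, d3 to three base stations at known coordinates.
# (Real U-TDOA solves a harder hyperbolic problem from time differences.)

def trilaterate(stations, distances):
    """Solve for (x, y) given three (xi, yi) stations and distances di."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise yields two linear equations
    # in x and y: a1*x + b1*y = c1 and a2*x + b2*y = c2.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # Cramer's rule for the 2x2 linear system
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical phone at (3, 4), with stations on a 10 km grid:
stations = [(0, 0), (10, 0), (0, 10)]
distances = [5.0, (49 + 16) ** 0.5, (9 + 36) ** 0.5]
print(trilaterate(stations, distances))
```

With only two stations (the literal “two known points”), the two distance circles intersect in two places, which is why real systems measure from three or more stations.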

UMTS – UMTS is one of the standard technologies used to enable 3G mobile services e.g. video on your phone.

WAP (Wireless Application Protocol) – This is the technology that enables mobile phones to browse the Internet: an open standard for network communication that allows mobile devices to access the Internet. WAP is a lightweight protocol providing primitive Internet support (from a desktop point of view), and it was criticized for fragmenting the Web into Desktop and Mobile variants.

  • WAP 1.x – WML
  • WAP 2.x – XHTML-MP

WEB APPLICATION – A web application is an application that is accessed via Web browser over the Internet.  Application runs on a web server. Markup documents are typically rendered on the User’s phone. No binary compilation or persistent local storage.

WiMax – (802.16a) WiMax is the trade name for a family of new technologies related to the IEEE 802.16 wireless standards. WiMax has the potential for very long range (5 – 30 miles) and high speeds. The initial version, based on 802.16a, is designed for fixed (non-mobile) applications only, such as a wireless replacement for home DSL or cable modem service.  Newer versions, such as 802.16e, add support for mobility, potentially making WiMax a competitor for certain 3G or 4G cell-phone technologies. WiMax uses OFDM (Orthogonal Frequency Division Multiplexing), an increasingly common type of digital wireless technology that is also used in some digital radio and television standards. WiMax operates at higher frequencies than mobile phone networks. WiMax technology can operate in the 2.5 or 3.5 GHz licensed bands, or in the 5.8 GHz unlicensed band.

WML (Wireless Markup Language) – Computer language format used to create websites that can be viewed on a wireless telephone or device. WML is an XML-based markup language for mobile phones with a very simple syntax. WML was standardized by the WAP Forum and is considered to be a legacy markup language for mobile devices. WML implements WAP 1.x.
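For flavor, here is what a tiny WML “deck” might look like; the content is invented for illustration, but the structure (one deck, multiple cards, each card one phone screen) is the defining feature of the language:

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- A WML page is a "deck"; each <card> is one screen on the phone. -->
  <card id="home" title="Welcome">
    <p>Hello from a WML deck.
       <a href="#about">About</a></p>
  </card>
  <card id="about" title="About">
    <p>Linking between cards needs no new server round trip.</p>
  </card>
</wml>
```

Shipping several cards per deck was a deliberate concession to slow, high-latency mobile networks of the WAP 1.x era.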

WTAI (Wireless Telephony Applications Interface) – A protocol used in conjunction with the Wireless Application Protocol (WAP) to allow a phone number to be linked to a web page.

WURFL (Wireless Universal Resource File) – WURFL is an open source directory and APIs for programmatic discovery of mobile device capabilities.

XHTML – XHTML is HTML reformulated in XML-compliant syntax.

XHTML Basic – W3C-standardized subset of HTML targeted for mobile devices, pagers and set-top boxes.

XHTML-MP – Superset of XHTML-Basic defined by the Open Mobile Alliance industry group. XHTML-MP is considered to be the implementation of WAP 2.0. XHTML-MP is a very popular markup language for mobile devices and carrier sponsored applications and portals.


The origin of “BEAT L.A.” ! — The Boston Celtics Are In The NBA Finals! June 1, 2010

Posted by HubTechInsider in Definitions.
add a comment
BEAT L.A. !!

BEAT L.A. !!

You’ll be hearing the simple yet powerful “Beat LA!, Beat LA!, Beat LA!” battle cry all over New England now that the Celtics-Lakers rivalry has been renewed for the Finals.

For most fans, the chant is reminiscent of the playoff games in the old Boston Garden in the 1980s, when Magic Johnson squared off against Larry Bird and the Celtics and Lakers dominated the NBA.

But that’s not when the chant took off in Boston. It actually started as a chant supporting the Philadelphia 76ers.

For Celtics fans of that time period, the 76ers were the team to beat. Crucial to the understanding of this story, however, is the fact that the Celtics and 76ers respected each other wholly. So did the fans of both. They were enemies, but they were enemies who had earned each other’s respect.

In 1980, Philadelphia beat Boston in the semi-finals, earning a trip to meet the Lakers for the championship. In 1981, Boston beat Philadelphia, coming back from a three-games-to-one deficit. In 1982, they met once again in the semi-finals, and here is where the tale becomes more than just your usual sports story.

As always between these two teams, the 1982 series was an all-out total war. There was little to separate the two squads. The Celtics had Larry Bird, Kevin McHale, Robert Parish, Cedric Maxwell, Danny Ainge and Tiny Archibald. The 76ers had Julius Erving, Bobby Jones, Maurice Cheeks, Caldwell Jones, Darryl Dawkins and Andrew Toney. And, again, it came down to a seventh game, this time being played at the old Boston Garden.

The Garden was packed to the rafters, hot and muggy, as it usually was during the later rounds of the playoffs. Both teams battled hard, as they always did. The game went back-and-forth, one team gaining momentum and then the other.

With 26 seconds to go in Game 7 of the 1982 Eastern Conference finals at the old Garden and the Sixers pulling away from the soon-to-be ex-champs, the crowd began to chant, with no prompting from a giant scoreboard, or from cheerleaders, or due to any sort of pre-packaged canned marketing, the now-famous phrase. Philadelphia, after all, would be facing the hated Lakers in the NBA Finals. In the midst of a heartbreaking defeat, they were cheering on their most hated rivals.

“You hear what the crowd is chanting to the Sixers? ‘Beat LA,’” said CBS color commentator and Celtics legend Bill Russell as the Sixers were beating Boston 117-105 and the seconds ticked down.

“Beat LA … that’s great,” replied play-by-play man Dick Stockton.

And so it began.

“That was nice,” Series MVP Julius Erving said after that game, according to Sports Illustrated’s Anthony Cotton. “But it wasn’t as loud as ‘See you Sunday,’ was it?”

The “See you Sunday” chant was also made famous during the same series in Game 5 at the Garden, when the Celtics were down three games to one but the Boston fans were sure the Sixers would return to Boston for a deciding Game 7.

The “Beat LA” chant remains one of the most original creations from Boston, rivaling the “Ster-oids, Ster-oids” chants directed at Jose Canseco at Fenway in 1988 and the “Dar-ryl, Dar-ryl” shouts to Mets outfielder Darryl Strawberry during the 1986 World Series.

Philadelphia would lose to the Lakers in six games in the 1982 NBA Finals, but that didn’t stop the chant from spreading around the nation like a plague without a cure. It was even heard in the Meadowlands when the Ducks, who play in Anaheim, not L.A., faced the New Jersey Devils in the 2003 Stanley Cup Finals. “The fortunate but unfortunate part about the “Beat L.A.” (chant) is that it’s so unoriginal,” Derek Fisher said, breaking into a wry smile when asked if he was looking forward to hearing it from the Boston crowd.

What’s the difference between incentive stock options (ISOs), nonqualified stock options (NSOs), and Restricted Stock? May 27, 2010

Posted by HubTechInsider in Boston Executive Moves, Definitions, Investing, IPOs, Management, Staffing & Recruiting, Startups, Venture Capital.
Tags: , ,
add a comment


What is the difference between the types of stock options? How many different kinds of stock options are there?

I am often asked about negotiating stock options as part of a Boston high tech or IT job compensation package for an executive or management position. As an entrepreneur and businessman who has both granted and received stock options, I have found, to my surprise, that many prospective employees leave me with the distinct impression that they have sat through job interviews listening to a company executive or recruiter talk about subjects such as “incentive and nonqualified stock options”, “vesting periods”, “strike price”, and “dilution”, nodding their heads in mute agreement, as if they understood everything.

First off, if you are serious about assessing equity incentives, and stock options in particular, you need to familiarize yourself with the lingo. I have included the Hub Tech Insider’s Glossary of Stock Options Terminology below, at the conclusion of this article. If you want to know the difference between incentive stock options (ISOs), nonqualified stock options (NSOs), and Restricted Stock, then I will attempt to shed some light on the confusion by writing about the different types of stock options.

Incentive Stock Options qualify for preferential tax treatment – the key preference being that the recipient can delay paying taxes on stock acquired by exercising the option until the stock is actually sold. If the recipient sells the stock right away, any gain is treated as ordinary income, which is taxed at the same rate as salary; but if the stock is held for at least a year, any gain qualifies as a capital gain, which is taxed at a maximum of 20%.

It is important to note that incentive stock options can only be granted to employees (as opposed to consultants or other contractors). Nonqualified options can be handed out to consultants, contractors, outside directors, and anyone else the company wants, but the recipient pays taxes on the difference between the exercise price of the option and the value of the shares as ordinary income as soon as the shares are acquired, rather than when the shares are sold. That means the recipient may wind up paying taxes before receiving any money.

Restricted stock can best be thought of as a mirror image of incentive stock options. Instead of being made available for purchase over a period of time, as incentive stock options are, restricted stock is given out all at once when an individual joins a company, usually with the restriction that it be sold back or forfeited to the company if the employee leaves before a certain period of time has gone by. The reason more companies are making restricted stock available to certain senior executives is that it offers a potential tax advantage: because executives get their hands on the stock as soon as they join the company, they have a good shot at fulfilling the one-year holding period necessary to qualify for capital gains treatment on any profits from the eventual sale of the stock. Given how fast some companies go public or are acquired, the capital gains treatment can result in significant tax savings.
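The timing difference between these option types can be made concrete with a small sketch. This is a hypothetical illustration, not tax advice: the function names are mine, and the 35% and 20% rates are example figures chosen to match the ordinary income and capital gains maximums discussed above.

```python
# A minimal sketch of the tax-timing difference described above. All rates,
# share counts, and prices are hypothetical examples, not tax advice.

def nso_tax_at_exercise(shares, strike, fmv_at_exercise, ordinary_rate):
    """NSOs: ordinary income tax is owed on the spread as soon as shares are acquired."""
    spread = (fmv_at_exercise - strike) * shares
    return spread * ordinary_rate

def iso_tax_at_sale(shares, strike, sale_price, held_one_year,
                    ordinary_rate, capital_gains_rate):
    """ISOs: tax is deferred until the stock is sold; the rate depends on the holding period."""
    gain = (sale_price - strike) * shares
    rate = capital_gains_rate if held_one_year else ordinary_rate
    return gain * rate

# 1,000 options struck at $1, shares worth $11 at exercise and sale:
print(nso_tax_at_exercise(1000, 1.0, 11.0, 0.35))          # $10,000 spread taxed immediately at 35%
print(iso_tax_at_sale(1000, 1.0, 11.0, True, 0.35, 0.20))  # deferred, 20% capital gains if held a year
print(iso_tax_at_sale(1000, 1.0, 11.0, False, 0.35, 0.20)) # deferred, but ordinary rate if sold early
```

Note how the NSO holder owes the $3,500 at exercise, before seeing a dime from a sale, which is exactly the cash-flow trap described above.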

The Hub Tech Insider Glossary of Stock Option Terminology:

Above Water – Options allowing the purchase of shares of stock for less than the market price are said to be “above water”.

Authorized Shares – The number of shares of stock available for a company to issue.

Bearish – Having a negative opinion about the future of the stock market.

Bullish – Having a positive opinion about the future of the stock market.

Capital Gains – The profit gained from the sale of an investment, such as stock, which is taxed at lower rates than ordinary income.

Cashless Exercise – Allows an individual to temporarily borrow the money needed to exercise options by selling some of his/her stock in order to cover the cost of the remaining shares.

Cliff Vesting – Allows option holders to exercise some or all of their options at once, such as after the first year of employment, instead of incrementally over a period of several quarters or years. (See Vesting Period)

Equity – Common stock in a company.

Exercise – The act of acquiring stock promised by an option.

Exercise Price – The price at which an option holder may buy shares of stock. Often referred to as the strike price.

Expire – Options are typically granted for a definite period of time. If individuals do not exercise the options before a specified date, they expire (meaning they are forfeited).

Forfeit – Employees forfeit or forego their right to exercise their options by leaving a company before all the options have vested – or by not exercising them before their date of expiration because they are “under water”.

Founders Stock – Shares in a company held by the initial founders, usually subject to certain restrictions as to their disposition.

Fully Diluted Capitalization – The total number of shares outstanding or set aside for issuance (such as shares in a stock option plan).

Immediate Vesting – When one company has been bought by another, all options that have been issued by the acquired company are automatically vested and available for immediate exercise.

Incentive Stock Options (ISOs) – ISOs can only be granted to employees, as opposed to outside consultants or contractors. Their advantage is in allowing holders to acquire stock without paying taxes on their gain in value until they sell the stock.

Incremental Vesting – Period of time during which options become vested gradually, such as quarterly, which is specified in an option agreement. Such vesting is also referred to as vesting on an incremental basis.

Initial Public Offering (IPO) – An IPO is a company’s first sale of stock to the public.

Insider – An insider is any officer, director, advisor, or investor of a company that is public or about to go public. Because of his or her inside knowledge of a company’s financial plans, an insider is restricted in trading the company’s stock based on information not disclosed to the public.

Liquidity – How easily an investment holding can be converted into cash. Shares of stock are liquid if there is a ready market for those shares, meaning that the shares are available to be bought and sold. If a company is privately held, the stock is said to be illiquid.

Lockup Period – A period of time that insiders of a company are required by an underwriter to hold onto shares of stock gained from exercising options before being allowed to sell. Once individuals exercise options, they may not sell these shares for the entire lockup period, often one year.

Long-term capital gains – Profits from an investment held longer than one year. These gains are subject to tax rates that can be as high as 20%.

Nonqualified Stock Options (NSOs) – NSOs can be granted to anyone (employees, outside consultants, contractors, directors, and others). However, the recipient pays taxes on the difference between the price of the options and the value of the shares as soon as the shares are acquired, rather than when the shares are sold.

Offering Statement – A statement prepared by the underwriters and distributed to potential investors before a company goes public.

Option Agreement Letter – Document given by a company to an employee to legally grant options.

Option Grants – The number of shares a recipient can acquire via options.

Ordinary Income – Income subject to regular income tax rates, such as salary.

Par Value – The monetary value shown on a security.

Phantom Stock – Can be converted into real stock at some point in the future when certain predetermined events occur. Often referred to in the context of executive bonus plans tied to increases in a public company’s share price.

Preferred Stock – A class of stock that has advantages over common stock in the event of a sale or liquidation of the company.

Privately Held – A company that is owned by one or several individuals or institutions but not by the “public”. Shares of privately-held companies are said to be illiquid.

Publicly Held – A company is considered publicly held – or owned by the public – if its shares are traded on a public stock exchange (like the New York Stock Exchange or NASDAQ). A company can be publicly held even if the majority of its shares are still owned by the company’s original founders and investors.

Registration Statement – A statement required by the SEC in order for a company to conduct an IPO.

Repricing Options – When companies, usually publicly held, adjust the prices on stock options lower in consideration of a decline in their share prices that may place their employees’ stock options ‘under water’. Companies shy away from this practice because it means incurring an accounting charge against profits.

Restricted Stock – Stock available for purchase immediately upon joining a company, but subject to vesting and other conditions.

Securities and Exchange Commission (SEC) – The federal agency charged with ensuring that the investing public has access to all of the relevant and material information about every public company traded on a US market.

Shares – Ownership in a company.  Usually referred to as shares of stock.

Shares Authorized – The number of shares of stock that a company is allowed to issue, whether they are outstanding or are held in treasury by the company.

Shares Outstanding – Stock held by investors, as opposed to shares held in the company treasury.

Short-term Capital Gains – Profits from an investment held less than one year. These gains are subject to taxes at regular income tax rates, which often exceed 20%.

Spread – When options are “above water”, the spread is the difference between the grant price and the stock’s market value.

Stock – Equity or ownership in a company, commonly referred to as common stock.

Stock Option Plan – An employee incentive plan that allows employees of a company the option to buy shares of stock in the company at a specified price at some point in the future.

Stock Options – These grant the right, but not the obligation, to buy shares of stock at a specified price within a particular time interval, and with a specific expiration date.

Stock Purchase Plan – A plan to encourage employees to take a personal financial stake in the company by offering shares of stock for purchase at a discount – usually in the range of 10-15% – over their “open market” purchase price.

Stock Split – Companies will often declare a split, often a 2-for-1 split, which will reduce by half the price per share and double the number of shares outstanding.

Strike Price – The price at which an option holder may buy shares of stock. Often referred to as the exercise price.

Under Water – If an option does not allow the purchase of shares of stock for less than the market price of those shares, the option is said to be “under water”.

Underwriters – Investment bankers who in effect buy a stake in the company and then sell this stake to the public. The underwriter guarantees a minimum price for the sale of the shares in return for a premium on the shares sold to the public if demand outstrips supply.

Venture Capital Firms – Investment vehicles funded by wealthy individuals looking to take risky stakes in promising new companies and technologies in return for both control and a share of future profits.

Vesting Period – Period of time during which the option holder is allowed to exercise incrementally more options that have already been granted. Vesting typically occurs over periods of three to five years in corresponding increments of 20% to 30% vested per year.

Warrants – An investment vehicle similar to options, allowing for purchase of stock at a specific price before a particular date or in the future.
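Several of the terms above (Cliff Vesting, Incremental Vesting, Vesting Period) can be illustrated with a short sketch. The four-year schedule with a one-year cliff below is a common example rather than a universal rule, and the function is my own; a real Option Agreement Letter spells out the exact terms.

```python
# A sketch of a common vesting schedule: a one-year cliff followed by
# monthly incremental vesting, fully vested after four years. The numbers
# are illustrative; real option agreements spell out the exact terms.

def vested_shares(total_granted, months_employed,
                  cliff_months=12, total_months=48):
    """Number of shares vested after a given number of months of employment."""
    if months_employed < cliff_months:
        return 0                      # nothing vests before the cliff
    if months_employed >= total_months:
        return total_granted          # fully vested
    # After the cliff, vesting accrues linearly, month by month.
    return total_granted * months_employed // total_months

print(vested_shares(4800, 6))    # 0    (before the cliff)
print(vested_shares(4800, 12))   # 1200 (the cliff: the first year vests all at once)
print(vested_shares(4800, 30))   # 3000 (incremental vesting)
print(vested_shares(4800, 48))   # 4800 (fully vested)
```

An employee who forfeits by leaving at month 30 would walk away able to exercise only the 3,000 vested shares, which is the mechanic behind the Forfeit entry above.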





Want to know more?

You’re reading Boston’s Hub Tech Insider, a blog stuffed with years of articles about Boston technology startups and venture capital-backed companies, software development, Agile project management, managing software teams, designing web-based business applications, running successful software development projects, ecommerce and telecommunications.

About the author.

I’m Paul Seibert, Editor of Boston’s Hub Tech Insider, a Boston focused technology blog. You can connect with me on LinkedIn, follow me on Twitter, even friend me on Facebook if you’re cool. I own and am trying to sell a dual-zoned, residential & commercial Office Building in Natick, MA. I have a background in entrepreneurship, ecommerce, telecommunications and software development, I’m the Senior Technical Project Manager at eSpendWise, I’m a serial entrepreneur and the co-founder of Tshirtnow.net.

How to be a High Flying Project Manager (or, “How Programmers View Project Managers”) March 16, 2010

Posted by HubTechInsider in Definitions, Management.
Tags:
add a comment

[A touch of Project Manager humor for you today, Dear Readers –Paul]

A man is flying in a hot air balloon and realizes he is lost. He reduces height and spots a man down below. He lowers the balloon further and shouts:

“Excuse me, can you help me? I promised my friend. I would meet him half an hour ago, but I don’t know where I am.”

The man below says, “Yes, you are in a hot air balloon, hovering approximately 30 feet above this field. You are between 40 and 42 degrees North latitude, and between 58 and 60 degrees West longitude.”

“You must be a programmer,” says the balloonist.

“I am,” replies the man. “How did you know?”

“Well,” says the balloonist, “everything you have told me is technically correct, but I have no idea what to make of your information, and the fact is I am still lost.”

The man below says, “You must be a project manager”

“I am,” replies the balloonist, “but how did you know?”

“Well,” says the man, “you don’t know where you are or where you are going. You have made a promise which you have no idea how to keep, and you expect me to solve your problem. The fact is you are in the exact same position you were in before we met, but now it is somehow my fault.”

The true Legend of Waltham’s Bear Hill December 4, 2009

Posted by HubTechInsider in Definitions.
Tags: , , ,
add a comment
Flowers beside the road to the top of Waltham's Bear Hill

In 1637 Samuel Saltonstall was surveying land granted to him for grazing by the City of Boston. As he traveled to the area, he was sidetracked as he passed what is now known as Watertown. As darkness set in, Saltonstall found shelter in the caves of what is known today as Bear Hill in Waltham. During the night, a ferocious eight hundred pound black bear attacked Samuel, but he wrestled the bear with his bare hands alone. Saltonstall took the bear on as a pet, domesticated him, and named him Chief Cutstomach after a famous Native American tribal leader of the area. The duo began to tour the colony, and henceforth the area surrounding Saltonstall’s legendary match has been known as Bear Hill.

Cliff face beside the road to the top of Waltham's Bear Hill


List of Military Contracting Companies December 1, 2009

Posted by HubTechInsider in Definitions, Military Contracting, Military Contracts, Technology, Venture Capital.
Tags: , , ,
add a comment

List of Military Contracting Companies

    Non Lethal Services

Halliburton
KBR (Kellogg, Brown and Root)
SAIC

    Military Consultants

MPRI
Vinnell
Dyncorp

    Private Military Companies

Executive Outcomes
Sandline
Kroll
Triple Canopy
Armorgroup
Aegis
Blackwater


What is Theory Y? How is it used as a management style? November 29, 2009

Posted by HubTechInsider in Agile Software Development, Definitions, Management, Project Management.
Tags: , , , , ,
1 comment so far

What is Theory Y? How is it used as a management style?

As I have said on these pages before, I needed to write a few short pieces on some of the different management styles I have encountered in my corporate and professional travels. I want to define each of these management styles so that I can compare and contrast them, as well as serving as reference points for the longer articles on this topic which I am in the process of drafting.

As I have previously stated, the purpose of this litany of alphabetic management styles is not to promote one over another; in fact, I don’t recommend adopting any of these naively. But nevertheless, many individual team members and managers will exhibit some behaviors from one of these styles, and it is helpful to know what makes them tick. Finally, certain individuals may prefer to be managed as a Theory X or Theory Y type (Theory Z, which I will write about at a future date, is less likely in this case), and it is good to be able to recognize the signs. Moreover, some companies might be implicitly based on one style or another.

The second management style about which I will write is one which will be perhaps less recognizable to many people than the aforementioned “Theory X“: “Theory Y”.

As opposed to Theory X, Theory Y holds that work is a natural and desirable activity. Hence, external control and threats are not needed to guide the organization. In fact, the level of commitment is based on the clarity and desirability of the goals set for the group. Theory Y posits that most individuals actually seek responsibility and do not shirk it, as proposed by Theory X.

A Theory Y manager simply needs to provide the resources, articulate the goals, and leave the team alone. This approach doesn’t always work, of course, because some individuals do need more supervision than others.


What is Theory X? How is it used as a management style? November 27, 2009

Posted by HubTechInsider in Agile Software Development, Definitions, Management, Project Management, Staffing & Recruiting.
Tags: , , , , , , , ,
1 comment so far

I needed to write a few short pieces on some of the different management styles I have encountered in my corporate and professional travels. I want to define each of these management styles so that I can compare and contrast them, as well as serving as reference points for the longer articles on this topic which I am in the process of drafting.

I will begin with some of the “Letter Management Styles”, of which there are several. The purpose of this litany of alphabetic management styles is not to promote one over another; in fact, I don’t recommend adopting any of these naively. But nevertheless, many individual team members and managers will exhibit some behaviors from one of these styles, and it is helpful to know what makes them tick. Finally, certain individuals may prefer to be managed as a Theory X or Theory Y type (Theory Z, which I will write about at a future date, is less likely in this case), and it is good to be able to recognize the signs. Moreover, some companies might be implicitly based on one style or another.

The first management style about which I will write is one which will be recognizable to every person, regardless of professional or personal background: “Theory X”.

Theory X is perhaps the oldest management style and is very closely related to the hierarchical, command-and-control model used by military organizations (with which I am intimately familiar).

One thing I can personally attest to regarding the Theory X management style is the strength of military organizations’ faith in the necessity of this approach: in the view of Theory X proponents, most people inherently dislike work and will avoid it if they can. Hence, in the Theory X management style, managers should coerce, control, direct, and threaten their workers in order to get the most out of them.

A statement that I recall from a conversation with a prototypical Theory X manager with whom I worked (in a prototypical Theory X organization) was “people only do what you audit”.


What is Six Sigma? How is it used, and what does it have to do with the CMM? November 27, 2009

Posted by HubTechInsider in Agile Software Development, Definitions, Management, Manufacturing, Products, Project Management, Technology.
Tags: , , , ,
add a comment

What is Six Sigma? How is it used, and what does it have to do with the CMM?

Developed by Bill Smith at Motorola in 1986, Six Sigma is a management philosophy based on removing process variation. It was heavily influenced by preceding quality improvement methodologies such as Quality Control, TQM, and Zero Defects. Six Sigma is a registered service mark and trademark of Motorola Inc. As of 2006, Motorola had reported over $17 billion in savings from their own employment of Six Sigma practices throughout their global enterprise. Early corporate adopters of Six Sigma who achieved well-publicized success through the application of Six Sigma best practices to their enterprises included Honeywell (previously known as AlliedSignal) and General Electric, where Jack Welch famously introduced and advocated the method. By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.

My own professional experiences with Six Sigma began in the early 1990s (I had first read about it in a Forbes magazine article in 1988), when I worked in manufacturing environments at Mercedes-Benz USA’s plant in Tuscaloosa (Vance), Alabama, as well as Phipher Optical Wire Product’s plant in Tuscaloosa, the same city where the University of Alabama is located. It was in these environments that I was tasked with learning about Six Sigma, spending many hours in classrooms and in factory floor and management workgroups, implementing the method and training toward a Six Sigma Black Belt. Six Sigma Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects, to which they devote 100% of their time. Black Belts primarily focus on Six Sigma project execution, whereas those known in the Six Sigma universe as Champions and Master Black Belts focus on identifying projects and functions for Six Sigma.

Implementing a Six Sigma program in a manufacturing environment means more than delivering defect-free product after final test or inspection. It also entails concurrently maintaining in-process yields around 99.9999998 percent, defective rates below 0.002 parts per million, and the virtual eradication of rework and scrap. Other Six Sigma characteristics include moving operating processes under statistical control, controlling input process variables as well as the more traditional output product variables, and maximizing equipment uptime and optimizing cycle time. In a six sigma organization, employees are trained and expected to assess their job functions with respect to how they improve the organization. They define their goals and quantify where they are currently, their status quo. Then they work to minimize the gap and achieve “six sigma” (in a statistical sense) by a certain date.
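The yield and defect figures quoted above follow directly from the normal distribution: a "six sigma" process keeps its outputs within six standard deviations of the mean. A quick check, using only the Python standard library (the function name is my own):

```python
import math

def defect_rate_ppm(sigma_level):
    """Two-sided normal tail probability beyond +/- sigma_level standard
    deviations, expressed in parts per million."""
    return math.erfc(sigma_level / math.sqrt(2)) * 1_000_000

ppm = defect_rate_ppm(6)
yield_pct = 100 * (1 - ppm / 1_000_000)
print(f"{ppm:.4f} ppm defective")   # about 0.0020 ppm
print(f"{yield_pct:.7f}% yield")    # about 99.9999998% yield
```

These are the figures for a perfectly centered process, matching the paragraph above. The more commonly quoted Six Sigma figure of 3.4 defects per million assumes an additional 1.5-sigma long-term drift of the process mean.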

Six Sigma focuses on the control of a process to ensure that outputs are within six standard deviations (six sigma) of the mean of the specified goals. Six Sigma is oftentimes implemented using a system with which I have worked many times: define, measure, analyze, improve, and control (DMAIC).

Define means to describe the process to be improved, usually through some sort of business process model.

Measure means to identify and capture relevant metrics for each aspect of the process model. I have been in classrooms where this is referred to as “Goal -> Question -> Metric”.

Analyze means to use the captured metrics to revisit the process model and identify the aspect of the process whose improvement will have the highest payback.

Improve implies changing that aspect of the process so that beneficial changes are seen in the associated metrics.

Control means ongoing monitoring of the metrics to continuously revisit the model, observe the metrics, and refine the process as needed.

Although some organizations apparently strive to use Six Sigma as a part of their software quality improvement practices, the issue that often arises is finding an appropriate business process model for the software development effort that does not devolve into a highly artificial simulacrum of the waterfall SDLC (Software Development Life Cycle) process.



What is Scrum? How is it used to manage projects and teams? November 25, 2009

Posted by HubTechInsider in Agile Software Development, Definitions, Management, Project Management, Software.
Tags: , , , , , , , , ,
1 comment so far

As I continue to move in the Boston software development / high tech job market and talk to more and more people in the area, I not only come across the term “Scrum” in many job descriptions, but it is a word that is frequently bandied about by both recruiters and hiring managers. It is clear that there is a lot of confusion in the Boston area about what “Scrum” really is, and how it relates to Agile.

There is no substitute for the experience of running Scrum daily for years, as I have done. My heartfelt advice to anyone looking to adopt Scrum in their organization is to be flexible, take it easy on the cutesy names, and keep the daily meetings very brief. If you are the “ScrumMaster”, stay organized and lead the conversation around the room, notating all limiting factors, as that becomes your to-do list. Drop me a line with your own insights or comments on Scrum!

Scrum, as some people already know, is a project management methodology named after the rugby formation used to restart play. The Scrum project management method enables self-organizing teams by encouraging verbal communication across all team members and project stakeholders. At its foundation, Scrum’s primary principle is that traditional problem-definition and solution approaches do not always work, and that a formalized discovery process is sometimes needed.

Scrum’s major project artifact is a dynamic, prioritized list of work to be done. Completion of a largely fixed set of backlog items occurs in a series of short iterations, or “sprints”, often 30 days in duration.

Every day a brief meeting or “Scrum” is held in which project progress is explained, upcoming work is described, and impediments are raised. A brief planning session occurs at the start of each sprint to define the backlog items to be completed. A brief postmortem or heartbeat retrospective occurs at the end of each sprint.

A “ScrumMaster” (my advice is to never call yourself this in actual human life in an office of programmers and IT personnel…but know the job well and do it well nevertheless if you are the individual who finds themselves in this role) removes obstacles or impediments to each sprint. The ScrumMaster is not the leader of the team, as they are self-organizing, but rather acts as a productivity buffer between the team and any destabilizing influences.
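As a rough illustration of the artifacts described above, here is a minimal sketch of a prioritized backlog, sprint planning, and the impediment list that becomes the ScrumMaster's to-do list. The class and function names are my own invention, not part of any formal Scrum definition.

```python
# A minimal sketch of Scrum's core artifacts: a prioritized backlog and a
# sprint that commits to a fixed set of top-priority items. Names here are
# illustrative only.
from dataclasses import dataclass, field

@dataclass(order=True)
class BacklogItem:
    priority: int                                    # lower number = higher priority
    description: str = field(compare=False)
    done: bool = field(default=False, compare=False)

@dataclass
class Sprint:
    committed: list                                  # fixed for the sprint's duration
    impediments: list = field(default_factory=list)  # the ScrumMaster's to-do list

def plan_sprint(backlog, capacity):
    """Sprint planning: commit to the top-priority items, up to team capacity."""
    backlog.sort()                                   # order by priority
    return Sprint(committed=backlog[:capacity])

backlog = [
    BacklogItem(2, "Add password reset"),
    BacklogItem(1, "Fix checkout bug"),
    BacklogItem(3, "Refactor billing module"),
]
sprint = plan_sprint(backlog, capacity=2)
print([item.description for item in sprint.committed])
# ['Fix checkout bug', 'Add password reset']
```

At each daily scrum, impediments raised by the team would be appended to `sprint.impediments`; clearing that list is the ScrumMaster's job.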


What is Cao’s Law? June 11, 2009

Posted by HubTechInsider in Definitions, Fiber Optics, Telecommunications.
Tags: , ,
add a comment

Cao’s Law states that the communications spectrum is virtually infinite and that WDM (Wave Division Multiplexing) will allow the information transmitted over the available spectrum to expand exponentially, much as transistor counts grow under Moore’s Law. Using less and less power, WDM will allow finer and finer channels of light to transmit more and more data. Cao’s Law holds that these lambdas will multiply at two to three times the rate at which transistors multiply on an integrated circuit chip under Moore’s Law. On optical fibers, as opposed to the tradeoff between power and connectivity in the transistor world, the tradeoff is between bitrate and channel count: at this point in the technology’s development, we can either pump a high bitrate over each channel or we can transmit lots of channels, but we cannot do both at the same time. Among telecom carriers today, Simon Cao’s Law seems to be manifesting itself in the real world.
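To make the claimed growth rates concrete, here is a hypothetical compound-growth comparison. It assumes transistor counts double every 24 months (one common statement of Moore's Law); all figures are illustrative, not measured data.

```python
# An illustrative compound-growth comparison of Cao's Law against Moore's Law.
# Moore's Law is taken here as a doubling every 2 years; Cao's Law, as
# described above, claims optical capacity grows two to three times faster.

def growth_factor(years, doubling_period_years):
    """Total growth multiple after the given number of years."""
    return 2 ** (years / doubling_period_years)

years = 10
moore = growth_factor(years, 2.0)        # doubling every 2 years
cao_low = growth_factor(years, 2.0 / 2)  # 2x Moore's rate: doubling every year
cao_high = growth_factor(years, 2.0 / 3) # 3x Moore's rate: doubling every 8 months

print(f"Moore's Law over {years} years: {moore:,.0f}x")  # 32x
print(f"Cao's Law at 2x the rate:  {cao_low:,.0f}x")     # 1,024x
print(f"Cao's Law at 3x the rate:  {cao_high:,.0f}x")    # 32,768x
```

Because the rate multiplier sits in the exponent, a two-to-three-fold faster doubling rate compounds into a thousand-fold or more difference in capacity over a decade.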

What is an ACNA? What is a CCNA code in telecommunications? June 8, 2009

Posted by HubTechInsider in Definitions, Fiber Optics, Telecommunications, Uncategorized.
Tags: , , , ,
add a comment

ACNA stands for Access Customer Name Abbreviation; it is a three-character alphabetic code assigned to identify carriers, both ILECs (Incumbent Local Exchange Carriers) and CLECs (Competitive Local Exchange Carriers), for billing and other identification purposes.

It is closely related to the CCNA code, or the Customer Carrier Name Abbreviation, which identifies the common language code for the IXC (InterExchange Carrier) providing the interLATA facility.

The CCNA reflects the code to be contacted for provisioning, whereas the ACNA reflects the IXC to be billed for the service.



What is the Mu-Law PCM voice coding standard used in North American T-Carrier telecommunications transmission systems? June 8, 2009

Posted by HubTechInsider in Definitions, Telecommunications, VUI Voice User Interface.
Tags: , , , , , , , , ,
1 comment so far
Sampling and 4-bit quantization of an analog s...

Image via Wikipedia

Mu-Law encoding is the PCM voice coding standard used in Japan and North America. It is a companding standard, compressing the signal before transmission and expanding it back upon reception. Mu-Law is a PCM (Pulse Code Modulation) encoding algorithm in which the analog voice signal is sampled eight thousand times per second, with each sample represented by eight bits, thus yielding a raw transmission rate of 64 Kbps. Each sample consists of a sign bit, a three-bit segment which specifies a logarithmic range, and a four-bit step offset into that range. The bits of the sample are inverted before transmission. A-Law encoding is the voice coding standard which is used in Europe.
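For the curious, the continuous mu-law companding curve underlying the standard can be sketched in a few lines of Python. (The real codec uses the segmented 8-bit approximation of this curve described above; the function names here are mine, and mu = 255 is the North American constant.)

```python
import math

MU = 255  # mu-law parameter used in North America and Japan

def mulaw_compress(x):
    """Compress a sample x in [-1, 1] with the mu-law companding curve.

    Quiet signals get boosted: small inputs occupy a proportionally
    larger share of the compressed output range.
    """
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

def mulaw_expand(y):
    """Invert the companding curve to recover the original sample."""
    return math.copysign((math.exp(abs(y) * math.log(1 + MU)) - 1) / MU, y)

sample = 0.01                          # a quiet input sample
compressed = mulaw_compress(sample)    # ~0.23, well above 0.01
restored = mulaw_expand(compressed)    # back to ~0.01
```

Note how a sample at only 1% of full scale is mapped to roughly 23% of the compressed range; that is the companding gain that makes 8-bit samples sound acceptable for voice.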


An explanation of the Nyquist Theorem and its importance to Mu-Law Encoding in North American T-Carrier Telecommunications Systems June 2, 2009

Posted by HubTechInsider in Definitions, Fiber Optics, Mobile Software Applications, Telecommunications, VUI Voice User Interface, Wireless Applications.
Tags: , , ,
add a comment

nyquist100

The Nyquist theorem established the principle of sampling continuous signals to convert them to digital signals. In communications theory, the Nyquist theorem states that two samples per cycle are all that is needed to properly represent an analog signal digitally: the sampling rate must be at least double the highest frequency in the signal. So, for example, a 4KHz analog voice channel must be sampled 8000 times per second. The Nyquist Theorem is the mathematical underpinning of the Mu-Law encoding technique used in T-Carrier transmission systems in North American telecommunications networks. In Europe, where E-carrier transmission systems are used, the same sampling principle underlies the similar but incompatible A-Law companding standard, which is why Mu-Law signals must be converted to A-Law (and vice versa) on circuits that span the two regions.
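The arithmetic above can be written out directly; a minimal sketch (function name my own):

```python
def nyquist_rate(max_freq_hz):
    """Minimum sampling rate (Hz) needed to capture all frequency
    content up to max_freq_hz, per the Nyquist theorem."""
    return 2 * max_freq_hz

# A 4 kHz voice channel, as carried on T-Carrier systems:
fs = nyquist_rate(4000)   # 8000 samples per second
bitrate = fs * 8          # 8 bits per sample -> 64000 bps, one DS0 channel
```

That 64 Kbps figure is exactly the raw rate of a single Mu-Law-encoded voice channel, which is why the DS0 building block of the T-Carrier hierarchy is 64 Kbps.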

The author of the Nyquist Theorem was Harry Nyquist. Harry worked in the research department at AT&T and later at Bell Telephone Laboratories. In 1924, he published a paper titled “Certain Factors Affecting Telegraph Speed”, which analyzed the correlation between the speed of the telegraph system and the number of signal values it used. Harry refined his work in 1928, when he republished it under the title “Certain Topics in Telegraph Transmission Theory”. It was in this paper that he expressed the Nyquist Theorem, which established the principle of using sampling to convert a continuous analog signal into a digital signal. Claude Shannon, the author of Shannon’s Law, cited both of Nyquist’s papers in the first paragraph of his classic paper “The Mathematical Theory of Communication”. Harry Nyquist is also known for his explanation of thermal noise, sometimes called “Nyquist noise”, and for AT&T’s 1924 version of a fax machine, called “telephotography”.

His remarkable career included advances in long-distance telephone circuits, picture transmission systems, and television. Dr. Nyquist’s professional, technical, and scientific accomplishments are recognized worldwide; it has been claimed that he and Dr. Claude Shannon are responsible for virtually all the theoretical advances in modern telecommunications. In addition to his theoretical work, Nyquist was a prolific inventor, credited with 138 patents relating to telecommunications over his 37-year career. His accomplishments underscore the excellent preparation in engineering that he received at the University of North Dakota.




