Tags: maynard, supercomputers, Venture Capital, waltham
SiCortex, a high-efficiency supercomputer maker based in Maynard that employed around 80 people, is shutting its doors and ceasing operations. The company was backed by two venture capital firms: Cambridge, Mass.-based Flagship Ventures and Waltham, Mass.-based Polaris Venture Partners. It had raised a $5 million venture round in January and $37 million in total.
SiCortex’s primary product was a high-powered computer built for complex computations and scientific discovery that, the company claimed, was up to 80 percent more efficient than traditional supercomputers. Some of the company’s supercomputer units could reportedly be plugged into standard AC wall sockets, unlike the more common supercomputers that require special industrial AC power connections.
The company’s primary customers included government and academic institutions that could not afford more expensive, traditional supercomputers. It was thought the company had sold around 70 units within the past year.
Gerbsman Partners, a California-based business consulting firm, said on its blog that it had been retained by SiCortex’s board to solicit bids for the SiCortex assets, primarily the company’s intellectual property.
The company’s public relations firm is Racepoint Group, based in Waltham.
SiCortex has not filed for bankruptcy protection in Massachusetts or in Delaware, where it is incorporated.
Tags: Fiber Optics, networking, Telecommunications
The term “next-generation network” has become more and more prevalent in telecommunications-industry publications and in the general technology and news media. It actually has a very specific meaning in the telecom industry, which I would like to clarify:
The rapidly declining cost of bandwidth, combined with the easy availability of powerful and cheap microprocessor technology, has brought to the fore the economies of scale that packet switching combined with statistical multiplexing afford, provided that a solution can be found to latency and packet loss.
In order to answer these challenges, next-generation networks have at their core two overriding concepts. First of all, a next-generation network supports QoS (Quality of Service) while being a fundamentally high-speed packet-based network which can carry and route a myriad of broadband services, including multimedia, video, data and voice.
Secondly, a next-generation network serves as a common application platform for services and applications that a customer base can access from anywhere across the network as well as outside it.
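To make the economies of scale mentioned above a bit more concrete, here is a minimal Python sketch, with invented traffic figures, comparing the capacity a dedicated-circuit network must provision against what a statistically multiplexed packet link needs, using a simple binomial model of bursty sources:

```python
from math import comb

def circuit_capacity(n_sources, peak_rate):
    """Circuit switching: every source gets a dedicated channel at its peak rate."""
    return n_sources * peak_rate

def statmux_capacity(n_sources, peak_rate, p_active, loss_target=0.001):
    """Statistical multiplexing: provision the smallest capacity such that the
    probability of more simultaneously active sources than the link can carry
    stays below loss_target (a simple binomial burst model)."""
    def tail(k):
        # P(more than k of the n sources are active at the same instant)
        return sum(comb(n_sources, i) * p_active**i * (1 - p_active)**(n_sources - i)
                   for i in range(k + 1, n_sources + 1))
    for k in range(n_sources + 1):
        if tail(k) < loss_target:
            return k * peak_rate
    return n_sources * peak_rate

# 100 sources, each bursting to 1 Mbps but active only 10% of the time
dedicated = circuit_capacity(100, 1.0)
shared = statmux_capacity(100, 1.0, 0.10)
```

Under these invented figures, the shared link needs roughly a fifth of the dedicated capacity while keeping the overflow probability under 0.1% — which is precisely why latency and packet loss, rather than raw capacity, become the problems a next-generation network must solve.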
Tags: Defense Contracts, waltham
Waltham-based Raytheon Company, with 72,000 employees and a reported 2008 net income of $1.7 billion on revenue of $23.2 billion, has landed three new missile deals and a new radar contract from the U.S. military.
The company’s Integrated Defense Systems unit has landed $30 million from the U.S. Army for its Surface-Launched Advanced Medium-Range Air-to-Air Missile (SLAMRAAM).
Last week, the company landed a $54 million deal with the U.S. Navy to retrofit Super Hornet Block II aircraft with APG-79 active electronically scanned array radars.
Earlier this month, Raytheon’s Integrated Defense Systems (IDS) division landed two Patriot missile deals with the U.S. Army. Under the first contract, worth $115 million, Raytheon will upgrade radar components on four Patriot missile systems. The deal was awarded by the Army’s Aviation and Missile Command.
The second Raytheon IDS Patriot contract, valued at $9 million, was also awarded by the U.S. Army, for Patriot missile maintenance. Under the contract, IDS will perform missile maintenance at facilities in the United States and at overseas locations.
Raytheon stock (stock symbol: RTN) is sliding as Congress debates the defense budget. But the market may be overlooking some basic facts: international contracts make up one fifth of the company’s revenues, Raytheon’s balance sheet is strong, and homeland security is at the center of the company’s business.
Raytheon is debt-free and generates strong cash flows. Margins, profitability, and consistency of earnings are all improving. The stock, recently at 46.07, is trading below its five-year average price-to-earnings (P/E) ratio of 15 and has a 2.5% dividend yield. Raytheon beat first-quarter expectations and raised its 2009 earnings per share (EPS) guidance to between $4.55 and $4.70.
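For the curious, here is the back-of-the-envelope arithmetic behind those figures (a quick sanity check, not investment advice):

```python
price = 46.07
eps_low, eps_high = 4.55, 4.70   # 2009 EPS guidance range
five_year_avg_pe = 15

pe_high = price / eps_low   # forward P/E if earnings come in at the low end
pe_low = price / eps_high   # forward P/E at the high end of guidance

# implied annual dividend per share from the quoted 2.5% yield
dividend = price * 0.025
```

Both ends of the guidance range imply a forward P/E of roughly 10, comfortably below the five-year average of 15 cited above.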
Disclaimer: I do not own stock in Raytheon, but I wish I did. I think it’s a good story and a great Massachusetts-based company. I wish they would hire me, but it doesn’t look like that’s going to happen. I even moved to Waltham, have applied a million times and nothing helps. (laughing)
Directory of Massachusetts Venture Capital Web Sites May 26, 2009Posted by HubTechInsider in Venture Capital.
Tags: boston, Cambridge, Venture Capital, waltham
I often receive emails requesting information on the activities and investments of Massachusetts venture capitalists and Massachusetts venture capital firms. As you know, I often write about such topics and large venture capital placements within Massachusetts on these pages. The Masshome site has provided a nice directory of Massachusetts venture capital web sites that are provided by Massachusetts-based VC firms.
You can find the directory of Massachusetts Venture Capital Web Sites here.
Tags: automobiles, electric vehicles, renewable energy
In the modular, green, future US automobile industry, there are scores of agile automotive component companies that are only too eager to research and develop the next generation of innovative US-based automobile solutions. Two Massachusetts-based firms are working on creating some essential new technologies to enable the US auto industry to regain its innovation edge.
A123 Systems, based in Watertown, is producing advanced nanophosphate lithium-ion batteries that are intended for use in electric automobiles. Both Chrysler and Norway-based electric carmaker Think are laying plans for using the batteries in their future models.
GEO2 Technologies, based in Woburn, is developing a new type of rigid ceramic diesel fuel filter that exhibits sponge-like properties. The design facilitates increased airflow, which reduces back-pressure and is capable of enhancing a vehicle’s fuel efficiency and boosting power output. The technology may also have applications in providing more pickup to smaller gas engines used in hybrids and other enhanced MPG automobiles.
Demandware gets Round D funding of $15MM and works to answer SaaS ecommerce challenges, under incredible marketplace pressures May 23, 2009Posted by HubTechInsider in Ecommerce, Uncategorized, Venture Capital.
Tags: demandware, ecommerce, SaaS, Venture Capital
Recently I tweeted about Demandware (Woburn, MA) not getting their Round D funding – this was incorrect, and I have retracted this information. The link to the $15 million Form D filing with the SEC is here.
[There were rounds of layoffs at Demandware around spring 2009, just prior to this Round D. Round D is generally the last round of financing before an IPO. Many of their employees in Woburn, Massachusetts were laid off. Interesting move. Without this $15 million, it was unclear to many outside observers how strong Demandware’s cash position would have been. Take care to distinguish between “brands” and actual “accounts”. Demandware has lost some high-profile accounts (their model is to skim off 3% of sales in addition to setup and hosting fees – despite this, Demandware still has a “burn rate”) that they don’t exactly mention in their press releases. A SaaS (“Software as a Service”) provider such as Demandware is nowadays caught in the crossfire, under incredible pressure from three fronts: powerful and robust open source ecommerce solutions such as Magento leverage the cost argument, Java-based solutions are starting to get long in the tooth in the face of massively scalable new technologies such as Ruby on Rails, and developments in cloud computing leverage the hosting argument. Predictably, Demandware and their PR corps are hard at work dissembling so as to position themselves as the “worry-free package for merchants without in-house technology competence”. Of course, this competence is easily found on the cheap now that it is no longer 2000, and J2EE for ecommerce seems (to many) like a complex, costly, code-bloated dinosaur. Read the commentary below and make up your own mind, Dear Reader. It will be interesting to see if they are able to hold on — I’m rooting for them; however, if I were you and your enterprise, I would still take a long, hard look at Magento, Shopify, or other ecommerce providers. And I would have a lot of tough questions like the ones below ready for the salespeople and PR types.]
So Demandware may even IPO one day – although despite all the optimism about 2009, this year is still looking grim for new issuances. A recent report from Ernst & Young found that the pipeline of companies waiting to go public in the United States dwindled to 80 companies at the end of the second quarter, down from 90 companies three months earlier. There have been seven IPOs so far in 2009.
Demandware is a SaaS (Software as a Service) provider, and with all the controversy surrounding my incorrect, retracted tweet, I have been thinking about some of the reasons enterprises might decide against adopting a SaaS model for their ecommerce operations.
Although it can be tempting for large retail enterprises to partner with a SaaS ecommerce platform vendor to quickly launch an online store for short-term gains, it is important that the CIOs of these retail enterprises develop a defined SaaS strategy and incorporate it into their other long-term application and IT infrastructure plans. One of the most important aspects of this SaaS strategy must be an “exit strategy” for when they may want to bring the online storefront in-house. Hard to blame any company for ditching the revenue-sharing model.
It is vital that when these retail organizations evaluate SaaS ecommerce providers, they evaluate the competing ecommerce platform vendors on whether or not they have a plan and method in place to get the retail enterprise off the SaaS platform – in other words, what is the exit strategy in the long term? Five years out, when the online storefront is growing and becomes a cornerstone of the company’s total revenue stream, how does the retailer migrate the storefront back into the corporate IT environment if the management of the company decides to reintegrate? After the first two years of a SaaS deployment, many enterprises find that cost savings begin to break down. Five years from initial deployment, will it be possible to reestablish control over the online retail presence?
Choosing an ecommerce platform vendor working with hosted technologies that align with the enterprise’s internal IT infrastructure (Microsoft .Net technologies vs. Java? Oracle, MS SQL Server, or MySQL?) could potentially ease migration pain down the road and enable cost savings when and if the decision to internalize critical ecommerce operations is made.
Ecommerce “on demand” software salespeople may try to attract a large retail organization with the promise of utility pricing – it may even sound so good that the large retail enterprise is tempted to bypass its IT department’s normal procurement specialists. This is not a good idea. Real utility pricing is almost certainly not as flexible as initially presented, and true utility pricing is rarely available. Because many SaaS contracts do not allow for volume reductions, some critics have labeled this licensing model “shelfware as a service”.
It is critical that large retail organizations negotiate the ability to reduce users. Do not allow the “on demand” vendor to lock the enterprise into negotiations before agreement on this basic principle is reached. If the “on demand” ecommerce platform vendor does not or will not agree to this basic tenet, then refuse over-commitment and negotiate escalating discounts for incremental spend in volume bands. Large retail organizations should always remember that SaaS licensing models provide steady and stable revenue streams for ecommerce “on demand” vendors, and because of this, the market is becoming increasingly competitive (Demandware’s competition includes MarketLive, among many others). Large retail organizations have immense leverage which can be used to achieve significant licensing concessions and discounts on larger competitive deals. In addition, given the continual downward pressure on SaaS pricing, single-year deals are much preferred, and it is essential to secure price caps on renewals.
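To make the volume-band idea concrete, here is a hypothetical sketch in Python (the tiers and per-user prices are invented for illustration) of how escalating discounts in volume bands compare against flat per-user pricing:

```python
def banded_cost(units, bands):
    """Price a commitment using escalating discounts in volume bands.
    `bands` is a list of (band_ceiling, unit_price) pairs; a ceiling of
    None on the last band means 'unlimited'."""
    total, prior_ceiling = 0.0, 0
    for ceiling, unit_price in bands:
        upper = units if ceiling is None else min(units, ceiling)
        if upper > prior_ceiling:
            total += (upper - prior_ceiling) * unit_price
        prior_ceiling = ceiling if ceiling is not None else units
        if units <= prior_ceiling:
            break
    return total

# Hypothetical bands: first 1,000 users at $50, next 4,000 at $40, the rest at $30
bands = [(1000, 50.0), (5000, 40.0), (None, 30.0)]
flat = 6000 * 50.0                  # flat per-user pricing, no volume discount
banded = banded_cost(6000, bands)   # 1000*50 + 4000*40 + 1000*30
```

At 6,000 users the banded structure saves $60,000 a year over flat pricing in this made-up example – exactly the kind of concession a large retail organization’s leverage should be buying.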
The vendor’s “on demand” production environment should also be scrutinized carefully. Some questions to get answered in writing may be: How often are changes made to the production environment? What is the breakdown of changes to the production environment by category? What percentage of changes had to be rolled back, or reverted? What sorts of regression tests are performed after a software patch / upgrade / code iteration?
It is vital that a keen eye is focused on the SaaS vendor’s churn and its churn management policies (for instance, Demandware has recently lost two major accounts that you won’t read about in their press releases, including Playboy International and the Vermont Teddy Bear Company). For example, how many customers have they lost in the past 6, 12, and 24 months? Is their customer retention improving over time? What percentage of their customer base does the churn represent? What is the average duration of customer retention? What is the breakdown by reason for customer churn? Beware of salespeople and marketing types who count “brands” as individual customers. A customer is a retail organization, not each of its individual product lines counted separately as “brands”.
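A minimal Python sketch of the churn arithmetic behind those questions, with invented figures (the expected-tenure line assumes a constant churn rate, which is a simplification):

```python
def churn_metrics(customers_start, lost, gained):
    """Basic churn bookkeeping over one period."""
    churn_rate = lost / customers_start
    retention_rate = 1 - churn_rate
    customers_end = customers_start - lost + gained
    return churn_rate, retention_rate, customers_end

# Hypothetical figures: 200 accounts at the start of the year, 14 lost, 30 signed
churn, retention, year_end = churn_metrics(200, 14, 30)

# Under a constant-churn assumption, expected customer tenure is 1 / churn_rate
avg_tenure = 1 / churn   # in periods (years here)
```

A 7% annual churn implies an average customer relationship of roughly 14 years; when a vendor will not disclose these numbers, that silence is itself an answer.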
Some ecommerce “on demand” vendors also provide fulfillment services (if they do not, the retail organization will have to continue to provide these services as a normal operating business expense). High-volume retail ecommerce by necessity implies that these operational details are handled with great care and efficiency. Some questions to ask: What is the status of your inventory? What box is located where? What function or customer would be affected by the loss of a certain box? When does your software / support contract expire, and what might this expiration impact?
Another primary focus for corporate ecommerce vendor selection decision makers is the emergence of platform-as-a-service providers such as Amazon (with EC2), IBM, Google, and Microsoft. Large retail organizations can use these platforms to build myriad applications, services and workflows not only to conduct online sales but also to perform advanced predictive analytics, gather fundamental mail-order management metrics like the future value of a customer, and move billing services into the cloud – all while providing immense capabilities for increasing uptime and availability during the high-volume holiday shopping season.
Best-practice ecommerce SaaS platform selection guidelines could also include data backup and disaster recovery (DR) policies, adherence to corporate IT standards regarding accepted technologies, and development tools and languages that internal software development departments are familiar with. SLAs (Service Level Agreements) from ecommerce platform vendors should be examined covering not only DR policies but also help desk support, performance, and uptime, so that buyers of SaaS ecommerce hosting services have a stronger sense of what they are purchasing.
Large retail enterprises have special needs to link internal billing and operational IT systems and external hosted ecommerce systems. Security, billing, fulfillment and compliance requirements differ from industry to industry and over-reliance on a hosted ecommerce service provider should be carefully examined. Retail enterprise decision makers may decide to get back to the fundamental vendor selection process, and take a long hard look at vendor viability in addition to the solution functionality provided by each hosted ecommerce service provider. These decisions should extend past the initial glow of cost savings in the first years of an ecommerce storefront deployment.
In a Feb 20th, 2009 research report, Forrester polled 352 corporate IT decision makers and asked them why they are not interested in SaaS:
Total cost concerns: 37%
Security concerns: 30%
SaaS application mismatch to corporate requirements: 25%
Integration issues: 25%
Lack of customization: 21%
Application performance: 20%
Complex pricing models: 16%
Vendor lock-in: 14%
Other reasons: 13%
Tags: cellular, Telecommunications
Frequently, natural disasters such as floods, tornadoes and hurricanes can destroy cellular infrastructure such as cell sites and switches. Fortunately, portable cellular equipment can be deployed to not only enable the continuation of normal cellular service but also provide extra cellular capacity in the case of a special event, such as conventions, major sporting events, and festivals. There are two main ways to provide this extra capacity or infrastructure replacement on demand: COWs and COLTs. A COW is a Cell site On Wheels, and a COLT is a Cell site on a Light Truck. The difference between the two is that a COW consists of cellular equipment on a flat-bed trailer that requires a hook-up to a truck tractor, whereas a COLT is able to be driven to a particular location for immediate deployment.
Both COWs and COLTs have their own batteries or generators, so they can operate independently of the availability of a local electrical supply. Usually they have either one or two cellular towers that are used to communicate back to the local switch. Many of these portable cellular sites also have a small office and emergency supplies. COLTs are used more often in the event of a natural disaster, as their independent operation (no need to hook up to a truck tractor, smaller size and higher mobility) can be a boon to rapidly restoring cellular service for emergency crews and workers.
Extra cellular capacity is often required in the case of a major sporting event like the Olympics or during large political conventions. In such instances, repeaters make spectrum available inside buildings from nearby “donor” cell sites. Repeaters within radio range of the “donor” cell site are located at the convention hall or sporting event building to provide more capacity for cell users. This repeater can then redirect and extend cellular signals from the nearby donor cellular site into the convention hall or building.
What is the difference between Cellular and PCS? May 17, 2009Posted by HubTechInsider in Definitions, Fiber Optics, Mobile Software Applications, Telecommunications, Uncategorized, VUI Voice User Interface, Wireless Applications.
Tags: cellular, networking, pcs, Telecommunications
Cellular is dual-classified as being inclusive of both analog and digital networks. Cellular networks began with analog infrastructure and over time migrated to digital. Depending upon your location in the world, cellular networks operate in the 800 MHz to 900 MHz band. Cellular infrastructure is generally based on a macrocell architecture. Macrocells cover an area roughly 8 miles in diameter, and because of this large coverage area, cellular operates at high power levels, in a range of 0.6 to 3 watts.
PCS is a more recent technology and has been all-digital since inception. As with cellular, depending upon where you are located in the world, the frequency band of operation is 1.8 GHz to 2 GHz. Instead of cellular’s macrocells, PCS uses two smaller infrastructures: microcells and picocells. As the names imply, the coverage areas of these architectures are smaller than macrocells’, around 1 mile in diameter. As a result, PCS uses much lower power levels – around 100 milliwatts.
So the key differences between PCS and cellular are the frequencies in which they operate, coverage areas of their different cell architectures, and the power levels each uses to transmit signals. They work essentially the same way, use the same types of network elements, and perform the same functions.
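Those coverage figures have a direct consequence for network build-out. Here is a rough back-of-the-envelope Python sketch (idealized circular cells, no overlap, and an invented metro-area footprint) of how many more sites a PCS-style microcell architecture needs to cover the same area as cellular macrocells:

```python
from math import pi

def cells_to_cover(area_sq_miles, cell_diameter_miles):
    """Rough count of circular cells needed to blanket an area
    (ignores overlap and real-world propagation effects)."""
    cell_area = pi * (cell_diameter_miles / 2) ** 2
    return area_sq_miles / cell_area

# Figures from the comparison above: macrocells ~8 miles across,
# PCS microcells ~1 mile across
metro_area = 500.0   # hypothetical metro footprint in square miles
macrocells = cells_to_cover(metro_area, 8.0)
microcells = cells_to_cover(metro_area, 1.0)
```

Since coverage scales with the square of cell diameter, the 8:1 diameter ratio means roughly 64 microcells per macrocell – which is why PCS can get away with 100 milliwatts while cellular needs watts.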
Tags: IVR, Mobile Applications, mobile software, mobile web, Telecommunications, VoIP, VUI
Many automated telephony and IVR vendors advertise that their web-based SaaS offerings make the development, testing, deployment and maintenance of an IVR application easy and straightforward. This confidence in the VUI design abilities of untrained, non-technical business analysts and enterprise services managers is woefully misplaced. Just because a software tool may be easy to use (and most of these SaaS web-based vendors provide VUI tools with horrific interfaces and GUI designs, such as a reliance on stone-age Java applets) does not mean that much thought, if any, has been invested in how these untrained resources should use that tool. This can and often does lead to catastrophic results.
I frequently encounter the mistaken prevailing notion that designing a VUI consists of nothing more than taking a GUI and “simplifying it” for use on the telephone. As the thinking goes, we can all talk on the telephone; not all of us can navigate a complex forms-based web site. But despite this general impression (perpetuated by IVR and automated telephony vendors and many software development teams within them, as well as their clients), some basic realities persist in shattering these ill-conceived concepts: people can read faster than they can listen with comprehension, speak faster than they can type, and talk much more quickly than they can process the meaning behind spoken words. So even though, based on initial impressions, designing an effective VUI might seem easier than designing a first-rate GUI, the converse is true: designing a great VUI is far more difficult than designing a GUI.
A VUI is inextricably linked with Time
When a user is navigating a GUI, they can read text at any location on the web page or application screen. The user can skip ahead visually to the section they are interested in. With a VUI, the user is a “prisoner” of the VUI design. The attention is captive: they must listen with (or without) patience to each word before they can hear the one that follows it. With this in mind, some best practices for VUI design emerge:
1. Long prompts are Bad: The longer the prompt, the more the user’s patience is taxed. Introductory or “tutorial” prompts explaining how the system works may be required for an outbound IVR application, or may be provided for the benefit of novice users; however, they should not be forced upon returning visitors or upon outbound IVR call recipients who have received similar IVR communications in the past.
2. Long VUI menus are Bad: Again to use the GUI as a contrasting example, on a web page you can present many menu options to the user, even hiding numerous options in a drop-down menu. A VUI menu, on the other hand, should never exceed five or six items at the most.
3. Get to the gist of the communication quickly: Forcing your captive “audience” to listen through introductory marketing copy written into an outbound IVR or inbound VRU script will become annoying very quickly to the user. Script your important information into the beginning of your prompts.
4. Allow ‘barge-in’: Expert users who know how to use the system and know what they want to do desire the ability to speed up the automated interaction with the system. Allow them to issue their commands to the system without forcing them to wait for the system to finish talking.
5. Give expert users global hotwords: Global “hotwords”, or application-level shortcuts, allow users to “cut to the chase”, enabling them to cut through menus and enjoy the feeling of enablement that a responsive VUI system can provide.
6. Allow the user to pause the interaction: The GUI has another crucial advantage over the VUI – the ability to stop and start again exactly where you left off after an indeterminate interval. While providing the exact same level of interaction control to the user is impossible in a VUI, if within your VUI design you are asking the user to provide the system with a membership number in a COB (Coordination of Benefits) automated telephony call for a health care provider, or asking them for their account number in an inbound VRU application, or if the system wants the user to write down a confirmation code or other information, then design your VUI so that the call recipient or caller can get their pencil and paper ready, find their membership card, and say “continue” when they are ready.
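As a toy sketch of how a few of these practices (short prompts, menus capped at five or six items, global hotwords, and tutorial text only for first-time callers) might look in code, here is a small Python dialog fragment. The menu items and hotwords are invented for illustration:

```python
HOTWORDS = {"main menu", "operator", "repeat", "pause"}  # global shortcuts

MENU = {
    "refills": "Prescription refills",
    "balance": "Account balance",
    "hours": "Pharmacy hours",
}  # never more than five or six options in a VUI menu

def prompt_text(menu, returning_caller=False):
    """Keep prompts short; skip the tutorial for returning callers."""
    intro = "" if returning_caller else "You can interrupt me at any time. "
    options = ", ".join(menu.values())
    return intro + "Main menu: " + options + "."

def handle_utterance(utterance, menu):
    """Check hotwords first, so expert users can cut through the menus."""
    utterance = utterance.strip().lower()
    if utterance in HOTWORDS:
        return ("hotword", utterance)
    if utterance in menu:
        return ("menu", utterance)
    return ("no_match", None)
```

In a real deployment, barge-in itself is handled by the telephony platform (the speech engine listens while the prompt plays); the dispatch logic above is what decides where an interrupting expert user lands.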
The One-way Temporal Flow of the User
Of course, the spoken word is not only temporally linear, but also one-way. In the same manner in which time is a “one-way street”, so is speech a “one-way medium”. When you are listening to a prerecorded voice prompt, you can’t hit a nonexistent rewind button on your telephone. A VUI is not like watching a ball game on your DVR or TiVo, either: you can’t easily go back and listen to the prompt again. This is in stark contrast to the GUI world, where the user can jump back and forth within the text on the page or screen. Three simple techniques can help to alleviate this conundrum:
1. Always let the user ask to have the system repeat the prompt: Perhaps the most elementary technique to mitigate the one-way temporal flow of the user is to have the system offer to repeat the last prompt. The user must be made aware of the fact that they can have any prompt repeated to them at any time during the IVR interaction.
2. Make Help available to the user: Information or instructions crucial to the task-completion ability of the call recipient or caller, presented at the beginning of the interaction, must remain available to the user at any point in the IVR interaction. Offer help not only at the beginning of the call but also at moments where the user seems to have arrived at an impasse. The need to offer help is acute at “no input”, “Out of Grammar (OOG)”, and “no match” states.
3. Present a summation of the gathered data: In form-filling dialogs or IVR interactions where the caller is being asked to provide information to the system, a marvelous approach to overcome the one-way temporal flow nature of the IVR interaction is to offer the call recipient or caller a summation of the data that has been gathered from them during the course of the IVR interaction so far.
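Here is a toy Python sketch of techniques 1 through 3 together: a form-filling loop that honors “repeat” and “help” at every prompt and ends with a summation of the gathered data. The fields and caller utterances are invented, and real platforms drive this from recognized speech rather than scripted lists:

```python
def run_form_dialog(fields, answers):
    """Walk a form-filling dialog, honoring 'repeat' and 'help' at every
    step, and finish with a summation of everything gathered."""
    collected = {}
    for field, prompt in fields:
        for utterance in answers[field]:
            if utterance == "repeat":
                continue   # re-play the same prompt and stay on this field
            if utterance == "help":
                continue   # play field-specific help, then re-prompt
            collected[field] = utterance
            break
    summary = "; ".join(f"{k}: {v}" for k, v in collected.items())
    return collected, "You said - " + summary

# Invented dialog: the caller asks for a repeat, then for help, mid-form
fields = [("member_id", "Please say your membership number."),
          ("dob", "Please say your date of birth.")]
answers = {"member_id": ["repeat", "A1234"], "dob": ["help", "May 5 1970"]}
collected, summary = run_form_dialog(fields, answers)
```

The key property is that “repeat” and “help” never advance the form: the caller always lands back on the same field, and the closing summation gives them the review pass that a GUI form gets for free.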
Persistence in a VUI is not visible to the user as in a GUI
Callers or call recipients perhaps show the most frustration when they feel they have lost track of “where they are” in the course of traversing a scripted inbound or outbound IVR interaction. Aggravation mounts as the user becomes increasingly unsure of what to do next, and of what the system expects them to do next. Whereas a web page or application screen typically provides a multitude of visual cues, such as a menu tree, a “breadcrumb” navigation path, or even something as simple and effective as the URL address bar on a browser, nothing comparable is available in the VUI world. Some approaches to mitigate these factors emerge to the experienced VUI designer:
1. Auditorily “announce” the user’s position in the IVR exchange: In the same manner that a properly designed web page or application screen will tell the user where they are in terms of navigating a site or application, so should a well-designed voice interface let the user know their exact position in the IVR interaction. A simple and effective technique for providing the user with such “mental markers” is to use a word or two to announce this position: “Main menu”… “Here are the drugs in your prescription refill:”, etc.
2. Audio breadcrumbs: The VUI version of the “breadcrumb navigation” trails featured so prominently on web sites in the GUI world can be emulated in the VUI world, where they prove no less useful. Each “voice page” that requires interaction with the user can be associated with a “position page” that announces the user’s position within the dialog tree. “Prescriptions, Reorder, Address”, as an example, would very nicely indicate to the user that they chose “Prescriptions”, then “Reorder”, and are now confirming the prescription reorder address on file with the system. A “go back” option should be offered to users at these “position page” states.
3. Audio Icons: Auditory icons, or “earcons”, are VUI equivalents of the GUI’s icons. These audio icons can be extremely useful to both the VUI designer and the call recipient or caller, either by announcing to the user that a particular action is about to be undertaken or by positioning the user within an IVR menu structure or transaction path. “Wait audio”, or sounds played to indicate that the system is busy performing a record lookup or other function, can prevent the user from interpreting an extended silence as a system crash or as the end of the IVR interaction.
GUIs present one fundamental advantage over VUIs: the user navigating a web page or an application screen has control over the medium, the message, and the interaction itself. Although a poor GUI can make the user feel helplessly confused, a VUI faced with the challenges outlined above has to be near-perfect to prevent the user from abandoning the IVR interaction entirely through the simple and universal act of hanging up the telephone. VUI designers should always be aware of the significant differences between designing an effective and useful GUI and a VUI. It would be ill-advised to enter into a VUI design task or project of any size while carrying familiar GUI design assumptions into the endeavor.
Want to know more?
You’re reading Boston’s Hub Tech Insider, a blog stuffed with years of articles about Boston technology startups and venture capital-backed companies, software development, Agile project management, managing software teams, designing web-based business applications, running successful software development projects, ecommerce and telecommunications.
About the author.
I’m Paul Seibert, Editor of Boston’s Hub Tech Insider, a Boston-focused technology blog. You can connect with me on LinkedIn, follow me on Twitter, even friend me on Facebook if you’re cool. I own and am trying to sell a dual-zoned, residential & commercial office building in Natick, MA. I have a background in entrepreneurship, ecommerce, telecommunications and software development. I’m the Director, Technical Projects at eSpendWise, and I’m a serial entrepreneur and the co-founder of Tshirtnow.net.
The SONET and SDH Signal Hierarchy: How many T-1s are in an OC-1, OC-3, OC-12, or OC-48? May 10, 2009Posted by HubTechInsider in Definitions, Fiber Optics, Telecommunications, Uncategorized.
Tags: Fiber Optics, networking, Telecommunications
I have found that there exists a touch of confusion out there in the wide world when it comes to recognizing the different signal levels and transmission speeds associated with what the telecom industry refers to as digital hierarchies. The two most common in North America are the PDH (plesiochronous digital hierarchy) and SONET hierarchies; SONET’s international counterpart is SDH.
Throughout my work as a telecommunications enthusiast, a pastime of discovery which has kept me occupied ever since my teen years, and on through many of my professional pursuits, I have always served as a point of reference for others in regards to the various telecommunications signal levels as well as the transmission speeds that these levels in the hierarchies represent. The following is my rough attempt to put this information into one place that can serve as a reference for me and others:
SONET was developed to aggregate, or multiplex, circuit-switched traffic such as T-1 (E-1 in Europe), T-3, and slower rates of data traffic from multiple sources on fiber-optic networks. SONET transports traffic at high speeds called OC (Optical Carrier) levels. The international version of SONET is called the synchronous digital hierarchy (SDH). SDH carries traffic at synchronous transport module (STM) speeds. Equipment interfaces make SONET and SDH speeds compatible with each other, so the same SONET switching equipment can be used for both OC and STM speeds.
OC-1 operates at 52 Mbps and is equivalent to 28 DS-1s (same as a T-1) or 1 DS-3 (same as a T-3). OC-1 is generally used for customer access lines; early-adopter customers such as universities, airports, financial institutions, large government agencies, and ISPs use OC-1.
OC-3 operates at 155 Mbps and is equivalent to 84 DS-1s (same as a T-1) or 3 DS-3s (same as a T-3). OC-3 speeds are required by end users such as companies in the aerospace industry and high-tier ISPs.
OC-12 operates at 622 Mbps and is equivalent to 336 DS-1s (same as T-1) or 12 DS-3s. This is another capacity towards which high-tier ISPs are moving. It was originally deployed for the metropolitan area fiber rings built out across cities worldwide, although those rings are now moving to OC-48.
OC-48 operates at 2,488 Mbps and is equivalent to 1,344 DS-1s (same as a T-1) or 48 DS-3s (same as a T-3). This capacity has been deployed for backbone, or core, networks. Today the metropolitan area rings are moving from OC-48 to OC-192.
OC-192 operates at 9,953 Mbps and is equivalent to 5,376 DS-1s (same as a T-1) or 192 DS-3s (same as a T-3). OC-192 is in use for backbone networks.
OC-768 operates at 39,813 Mbps and is equivalent to 21,504 DS-1s (same as a T-1) or 768 DS-3s (same as a T-3). Use of OC-768 is very rare outside of testing or research networks due to the great expense of this transmission speed level.
At times, you may see OC levels written as OC-1c, OC-3c, OC-12c, and so on. The "c" stands for concatenation, which combines the component payloads into one fat, or high-bandwidth, contiguous stream rather than switching them as independent channels. For example, an OC-3c carries a single concatenated 155 Mbps payload instead of three separately routed OC-1 payloads; the concatenated stream travels contiguously through the network as long as capacity is available. Most applications for concatenation are high-speed data and broadcast-quality video.
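The OC levels above follow a simple arithmetic pattern: OC-n runs at n times the OC-1 base rate of 51.84 Mbps and carries n DS-3s (each DS-3 holding 28 DS-1s). A quick sketch of that arithmetic (the function name is mine, not a standard API):

```python
# Each OC-n level is n times the OC-1 base rate of 51.84 Mbps,
# and carries n DS-3s (each DS-3 = 28 DS-1s).
OC1_RATE_MBPS = 51.84
DS1_PER_DS3 = 28

def oc_level(n):
    """Return (line rate in Mbps, DS-3 equivalents, DS-1 equivalents) for OC-n."""
    return (n * OC1_RATE_MBPS, n, n * DS1_PER_DS3)

for n in (1, 3, 12, 48, 192, 768):
    rate, ds3s, ds1s = oc_level(n)
    print(f"OC-{n}: {rate:,.0f} Mbps = {ds3s} DS-3s = {ds1s:,} DS-1s")
```

Running this reproduces the rounded figures quoted above (OC-3 at 155 Mbps, OC-48 at 2,488 Mbps, and so on).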
As for the DS, or Digital Signal, levels of the older PDH, or Plesiochronous Digital Hierarchy (plesiochronous means "almost synchronous," allowing minute variations in timing), they follow what is known as the T-carrier signal levels. Technically, the DS-x and CEPT-x terminology (DS-1, DS-3, CEPT-1, CEPT-3, and so on) indicates a specific signal level (and thus usable bandwidth) as well as the electrical interface specification, while the T-x and E-x terminology (T-1, T-3, E-1, E-3, and so on) indicates the type of carrier, a specific implementation of a DS-x/CEPT-x. More often than not these days, however, the terms DS-x and T-x are used interchangeably, so some people might use DS-1 and T-1 to refer to the same thing: a digital transport that can carry 1.544 Mbps over a total of 24 voice channels. In Europe, the same is true: E-1 is the same as CEPT-1, and so forth.
A DS-0 (T-0) has a bit rate of 64 Kbps and carries 1 voice-grade channel.
A DS-1 is equivalent to a T-1, has a bit rate of 1.544 Mbps, and carries 24 voice channels.
A DS-2 has a bit rate of 6.312 Mbps and carries 96 voice channels, equivalent to 4 T-1s. This is also sometimes referred to as a T-2, or T2.
A DS-3 has a bit rate of 44.736 Mbps and carries 672 voice channels, equivalent to 28 T-1s. This is also sometimes referred to as a T-3, or T3.
A DS-4 has a bit rate of 274.176 Mbps and carries 4,032 voice channels, equivalent to 168 T-1s. This is also sometimes referred to as a T-4, or T4.
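The T-carrier hierarchy scales the same way: each DS-0 is one 64 Kbps voice channel, and a DS-1 bundles 24 of them (24 x 64 Kbps = 1.536 Mbps of payload; framing overhead brings the line rate to 1.544 Mbps). Here is the table above expressed as data, with a small helper (the names are mine, for illustration):

```python
# PDH / T-carrier signal levels: (line rate in Mbps, voice channels).
# A DS-0 is one 64 Kbps voice channel; framing overhead explains why
# a DS-1 is 1.544 Mbps rather than 24 * 0.064 = 1.536 Mbps.
DS_LEVELS = {
    "DS-0": (0.064, 1),
    "DS-1": (1.544, 24),      # = 1 T-1
    "DS-2": (6.312, 96),      # = 4 T-1s
    "DS-3": (44.736, 672),    # = 28 T-1s
    "DS-4": (274.176, 4032),  # = 168 T-1s
}

def t1_equivalents(level):
    """How many T-1s (24 voice channels each) fit in the given DS level."""
    return DS_LEVELS[level][1] // 24

for name, (rate, channels) in DS_LEVELS.items():
    print(f"{name}: {rate} Mbps, {channels} voice channels, {t1_equivalents(name)} T-1(s)")
```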
Tags: Fiber Optics, networking, Telecommunications, VoIP
add a comment
For a good part of the 1990s, conventional wisdom in the telecommunications industry held that asynchronous transfer mode (ATM) and Internet Protocol (IP) were competing technologies. IP, the prevailing notion held, was a "best effort" service because IP-based networks indiscriminately discarded packets under congestion, and there was no standardized protocol to identify and prioritize video and voice. The industry at that time maintained that carriers would never accept best-effort protocols for voice traffic. ATM's ability to create virtual connections and to prioritize voice and video, so that packets would never be dropped and quality-of-service standards were met, therefore gave ATM a vital advantage.
ATM also had speed advantages, being capable of 155 and 622 megabits per second. Ethernet LANs at this time were limited to 10 megabits per second, and IP used between networks was also slower than ATM. For these reasons, when carriers wanted to improve their networks, they chose ATM equipment. I personally was involved in Fleet Bank's multi-million dollar loan to LDDS Worldcom (now MCI) in the late nineties for ATM gear for its UUNET data network subsidiary.
However, despite all of its inherent advantages, ATM gear was costly and complex to install. There was a slight push around this time for ATM to be used in LANs, especially in campus backbone networks and NSF research nets, but ATM was far too expensive to deploy on the desktop. So ATM was relegated to use in large corporate backbone networks and carrier traffic-bearing data networking.
So, as you can imagine, mainly due to these speed and quality-of-service advantages, established telecom vendors and most new softswitch vendors initially based their next-generation voice switch architectures on ATM rather than IP. Meanwhile, improvements in routers and faster speeds on IP networks were making IP networks much more suitable for voice. At the same time, Cisco's tag switching protocol, the forerunner of today's MPLS, was being developed and maturing. MPLS marks packets so that voice and video can be prioritized, a capability that lets IP packet flows be handled similarly to ATM virtual connections, which treat various types of traffic differently. Concurrently, IP speeds improved from 10 megabits per second to 100 megabits per second and, eventually, to gigabit speeds.
With these notable improvements in speed and service quality, along with the fact that corporate endpoints were already equipped to handle IP traffic, the founders of Sonus Networks (Westford, MA) chose in 1997 to base their next-generation, softswitch-based voice infrastructure on IP. This gave Sonus a head start over competitors who initially developed platforms based on ATM and then lost time and previously invested development money when they switched over to IP too late.
In related news, Sonus Networks of Westford, MA recently (11 March 09) announced it is "restructuring" again, cutting another 60 employees to complete its third round of cuts in three months. The company said this cut will amount to about 6% of its workforce; the job cuts over the three months total 160 at the networking equipment vendor. Sonus has a baseline resource level of approximately 1,000 people.
BT (British Telecom) has also recently (3 May 09) announced that it is cutting back on deployments of equipment and resources for its 21CN Next Generation Network (NGN) project.
Jefferies & Company analyst George Notter points out in a recent research note that Sonus was slated in late 2007 to provide an Access Gateway Controller Function (AGCF) to enable communications between core IP and PSTN access networks. However, Sonus may have to take a revenue hit now, as BT discovered that its NGN network architecture is too costly and has halted the NGN cutover project.
Update: Sonus Networks announces 2009 Q1 results
Tags: Fiber Optics, networking, Telecommunications
add a comment
The figures you frequently encounter for bandwidth measurements, expressed in bits per second, can be hard to interpret without real-world examples that are easily envisioned. For instance, fiber-optic transmission facilities and fiber cables today can very easily enable data transmission speeds of up to 10 Gbps. A 10 Gbps transmission speed means 10 billion bits per second can be sent down the fiber, enough bandwidth to send, as an example, all 32 volumes of the Encyclopedia Britannica in a mere tenth of a second.
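The arithmetic behind that claim is simple: at 10 Gbps, a tenth of a second moves 1 gigabit, or 125 megabytes. A quick sketch (the ~125 MB size for the Britannica text is my rough assumption for illustration, not a measured figure):

```python
# Time to transmit a payload of a given size over a link of a given rate.
def transfer_time_seconds(payload_bytes, link_bps):
    """Seconds to send payload_bytes over a link running at link_bps bits/second."""
    return payload_bytes * 8 / link_bps

# Rough illustration: assume the full Britannica text is ~125 MB.
britannica_bytes = 125 * 10**6
link_bps = 10 * 10**9  # 10 Gbps fiber
print(f"{transfer_time_seconds(britannica_bytes, link_bps):.2f} s")  # prints "0.10 s"
```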
But there is even more at work here. The real impact of fiber lies not just in the ever-advancing bit rates it enables, but also in its ability to reduce the number of conversions between analog and digital that are currently required as data traverses the legacy telecommunications infrastructure from point to point across the globe.
A tectonic shift is occurring: the transition from the electronic era to the optical, or photonic, era, built on an entirely new generation of switches and devices that are optical at their heart.
Consider the hypothetical example of a fax transmission from a location in the United States to a location in India. The document begins as marks on a piece of paper, the most analog of communications media. The fax machine in the US digitizes the paper's marks (the first conversion). The modem in the fax machine then converts these digital bits into analog sounds that can be sent over the telephone (the second conversion). The Class 5 switch at the local exchange in the US converts these sounds back to digital (the third conversion). The Class 4 switch in the US then converts these digital bits back into analog for the trip overseas on the telephone network to India (the fourth conversion). The receiving Class 4 switch in India converts the analog sounds back into digital bits (the fifth conversion). The Class 5 switch in India, at the local exchange near the destination fax machine, converts the bits back into analog for the transmission to the receiving fax machine (the sixth conversion). The modem in the receiving fax machine reconverts these analog sounds back into digital bits (the seventh conversion), which are assembled, checked for accuracy, and printed on a blank sheet of paper, rendering a final analog page of marks exactly in the form of the original page that went into the US fax machine (the eighth conversion). That is a total of eight conversions! Avoiding this high number of conversions is possible only in an optical network; and as more optical equipment enters the chain of network nodes, we will be able to utilize fiber to an even greater degree and achieve previously unimagined transmission speeds.
The Swine Flu Debacle of 1976 May 5, 2009. Posted by HubTechInsider in Biotech.
Tags: swine flu
1 comment so far
When swine flu struck a U.S. Army base in New Jersey in 1976, President Gerald Ford ordered a $135 million nationwide vaccination effort. It was a disaster. The epidemic never happened, and more than 30 people died from the vaccine. Time magazine recounts the tale.
How telephone numbers are assigned May 3, 2009. Posted by HubTechInsider in Definitions, Telecommunications.
Tags: Telecommunications, VoIP
add a comment
The North American Numbering Plan Administration assigns telephone numbers to state-certified wireline carriers in each state. Wireless carriers also receive numbers from the North American Numbering Plan Administration; however, they don't need to register on a state-by-state basis because the FCC, not individual states, licenses them to offer service. Carriers such as Vonage, Broadview Networks, and SBC (for its IP services) are required to obtain telephone numbers from local exchange carriers (LECs) in each state. The LECs can be either the incumbent or a competitor to the incumbent. The reason for this requirement is that VoIP is not currently defined as a telecommunications service. Thus, VoIP carriers, or the departments and subsidiaries within carriers that offer VoIP, must enter into agreements with a licensed carrier to obtain local telephone numbers in each state in which they wish to offer Voice over IP service. SBC IP has asked the FCC for a waiver of the requirement to obtain numbers from other carriers. In its own territory, it receives numbers from its parent, SBC; when it offers VoIP outside of its home territory, however, it has to enter agreements with other LECs. Prior to the announced merger with SBC, AT&T objected to SBC IP's request for a waiver, saying it would be unfair to other VoIP providers.
The North American Numbering Plan Administration assigns numbers in blocks of 1,000. This is called the number pooling system of allotting numbers because pools of 1,000 unused numbers are created. Prior to the year 2000, numbers were assigned to carriers in blocks of 10,000. This resulted in wasted numbers because many smaller carriers who did not use up all of their numbers could not share them with other carriers. To further conserve their numbers, in 2000, the FCC mandated that phone companies must first use up 60% of their assigned phone numbers before being given new ones. As of June 30, 2004, that percentage increased to 75%.
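The utilization rule amounts to a simple check: a carrier qualifies for a new 1,000-number block only once it has assigned at least 75% of the numbers it already holds (60% before the June 30, 2004 change). A minimal sketch of that rule (the function and parameter names are mine, for illustration):

```python
# Number pooling: blocks of 1,000 numbers, with a minimum utilization
# a carrier must reach before it can be assigned a new block.
BLOCK_SIZE = 1000
UTILIZATION_THRESHOLD = 0.75  # was 0.60 before June 30, 2004

def qualifies_for_new_block(numbers_assigned, blocks_held):
    """True if the carrier has used enough of its existing pool to request a new block."""
    total_held = blocks_held * BLOCK_SIZE
    return total_held == 0 or numbers_assigned / total_held >= UTILIZATION_THRESHOLD

print(qualifies_for_new_block(800, 1))   # prints True  (80% of 1,000 used)
print(qualifies_for_new_block(1400, 2))  # prints False (70% of 2,000 used)
```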
Boston Biotech goes on life support May 2, 2009. Posted by HubTechInsider in Biotech, Venture Capital.
Tags: Biotech, boston, Cambridge, pharmaceuticals, science, Venture Capital, waltham
add a comment
For Boston’s high-risk, cash-intensive biotech industry, it’s now-or-never time. Biotech companies have always been notoriously risky. They tend to burn through cash – to develop one drug can cost as much as $1 billion – and operate on a “pre-revenue” basis for years. Now the credit crunch is hitting the lab-coat crowd harder than most. For private outfits venture money is drying up on one end, and on the other there’s no easy exit in an IPO; on the publicly traded side, small and midsize listed companies are struggling to find enough funding to stay afloat. Life sciences research firm Burrill & Company says that one-third of publicly traded biotechs have less than six months’ worth of cash left, and the firm predicts that as many as 100 might go under or be forced to merge this year (10 have filed for bankruptcy since November). While that’s bad news for some companies, other companies may benefit from the available assets and skilled staff of these Boston biotech firms should they prove unable to bring their promising new drugs to market.
Oscient Pharmaceuticals, a Waltham, MA company working on drugs for high blood cholesterol and chronic bronchitis, and financed by Orbimed Advisors and Paul Capital Partners, has asked Broadpoint Capital to explore a possible sale after auditors warned the 213-employee company of an impending cash shortage.
Altus Pharmaceuticals, a Cambridgeport, MA company working on drugs for gastrointestinal and metabolic disorders, and funded by U.S. Venture Partners and Warburg Pincus, has pulled the plug on its drug for cystic fibrosis and cut more than 100 jobs. One of Altus’ VC board members resigned abruptly in April.
Measuring Voice Quality in a VoIP environment May 1, 2009. Posted by HubTechInsider in Telecommunications, Uncategorized.
Tags: networking, Telecommunications, VoIP
add a comment
One of the consequences of installing Voice over IP systems is that the "voice" sides of information technology departments are learning the lingo and technology of measuring voice quality on data networks. In turn, the staffs that manage data networks are becoming aware of the criticality of voice: they are learning how congestion affects voice services when they add new applications, and they notice lost voice service when they take down the network for maintenance or new installations.
Staff use network management tools that provide quality-of-service assessments to monitor the following factors in voice quality:
* Packet loss refers to the network dropping packets when there is congestion. Packet loss results in uneven voice quality. Voice conversations “break up” when packet loss is too high.
* Latency refers to delays as voice packets traverse the network. Latency is measured in milliseconds. It results in long pauses within conversations and clipped words.
* Jitter is uneven latency and packet loss resulting in noisy calls that contain pops and clicks or crackling sounds.
* Echo, hearing your voice repeated, is often caused when voice is translated from a circuit switched format to the IP format. This is usually corrected by special echo-canceling devices.
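Jitter, for instance, is commonly estimated the way RTP receivers do it (RFC 3550): keep a running average of the variation in packet transit time, smoothed with a gain of 1/16, i.e. J = J + (|D| - J)/16. A minimal sketch of that estimator:

```python
def rtp_jitter(transit_times_ms):
    """RFC 3550 interarrival jitter estimate from per-packet transit times (ms).

    D is the change in transit time between consecutive packets; the running
    estimate is smoothed with a gain of 1/16, as the RTP spec prescribes.
    """
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16
    return jitter

# Steady transit times yield zero jitter; varying ones raise the estimate.
print(rtp_jitter([50, 50, 50, 50]))      # prints 0.0
print(rtp_jitter([50, 70, 45, 80]) > 0)  # prints True
```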