Tuesday, June 7, 2011

Privacy Policy

Privacy Policy for http://internetregistrationservices.blogspot.com/

If you require any more information or have any questions about our privacy policy, please feel free to contact us by email at ibnuanshari02@gmail.com.

At http://internetregistrationservices.blogspot.com/, the privacy of our visitors is of extreme importance to us. This privacy policy document outlines the types of personal information that are received and collected by http://internetregistrationservices.blogspot.com/ and how it is used.

Log Files
Like many other Web sites, http://internetregistrationservices.blogspot.com/ makes use of log files. The information inside the log files includes internet protocol (IP) addresses, type of browser, Internet Service Provider (ISP), date/time stamp, referring/exit pages, and number of clicks, which is used to analyze trends, administer the site, track users' movement around the site, and gather demographic information. IP addresses and other such information are not linked to any information that is personally identifiable.

Cookies and Web Beacons
http://internetregistrationservices.blogspot.com/ does use cookies to store information about visitors' preferences, record user-specific information on which pages the user accesses or visits, and customize Web page content based on visitors' browser type or other information that the visitor sends via their browser.

DoubleClick DART Cookie
.:: Google, as a third party vendor, uses cookies to serve ads on http://internetregistrationservices.blogspot.com/.
.:: Google's use of the DART cookie enables it to serve ads to users based on their visit to http://internetregistrationservices.blogspot.com/ and other sites on the Internet.
.:: Users may opt out of the use of the DART cookie by visiting the Google ad and content network privacy policy at the following URL - http://www.google.com/privacy_ads.html

Some of our advertising partners may use cookies and web beacons on our site. Our advertising partners include:
Google AdSense


These third-party ad servers or ad networks use technology in which the advertisements and links that appear on http://internetregistrationservices.blogspot.com/ are sent directly to your browser. They automatically receive your IP address when this occurs. Other technologies (such as cookies, JavaScript, or web beacons) may also be used by the third-party ad networks to measure the effectiveness of their advertisements and/or to personalize the advertising content that you see.

http://internetregistrationservices.blogspot.com/ has no access to or control over the cookies that are used by third-party advertisers.

You should consult the respective privacy policies of these third-party ad servers for more detailed information on their practices as well as for instructions about how to opt-out of certain practices. http://internetregistrationservices.blogspot.com/'s privacy policy does not apply to, and we cannot control the activities of, such other advertisers or web sites.

If you wish to disable cookies, you may do so through your individual browser options. More detailed information about cookie management with specific web browsers can be found at the browsers' respective websites.

Tuesday, July 27, 2010

Multiprotocol Label Switching (MPLS)


Multiprotocol Label Switching (MPLS) is a mechanism in high-performance telecommunications networks that directs and carries data from one network node to the next. MPLS makes it easy to create "virtual links" between distant nodes. It can encapsulate packets of various network protocols.
MPLS is a highly scalable, protocol-agnostic, data-carrying mechanism. In an MPLS network, data packets are assigned labels. Packet-forwarding decisions are made solely on the contents of this label, without the need to examine the packet itself. This allows one to create end-to-end circuits across any type of transport medium, using any protocol. The primary benefit is to eliminate dependence on a particular Data Link Layer technology, such as ATM, frame relay, SONET or Ethernet, and to eliminate the need for multiple Layer 2 networks to satisfy different types of traffic. MPLS belongs to the family of packet-switched networks.
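The paragraph above notes that forwarding decisions key off the label alone. As a rough illustration of how compact that forwarding information is, the sketch below packs and unpacks a 32-bit MPLS label stack entry (20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL); the helper names and example values are illustrative assumptions, not part of this post.

```python
import struct

def pack_label_entry(label: int, tc: int = 0, bottom_of_stack: bool = True, ttl: int = 64) -> bytes:
    """Pack one 32-bit MPLS label stack entry: 20-bit label, 3-bit TC, 1-bit S, 8-bit TTL."""
    if not 0 <= label < 2**20:
        raise ValueError("label must fit in 20 bits")
    word = (label << 12) | ((tc & 0x7) << 9) | (int(bottom_of_stack) << 8) | (ttl & 0xFF)
    return struct.pack("!I", word)

def unpack_label_entry(entry: bytes) -> dict:
    """Recover the fields a label-switching router inspects when forwarding."""
    (word,) = struct.unpack("!I", entry)
    return {
        "label": word >> 12,             # forwarding decisions key off this value alone
        "tc": (word >> 9) & 0x7,         # traffic class (formerly EXP) bits
        "bottom_of_stack": bool((word >> 8) & 0x1),
        "ttl": word & 0xFF,
    }

# Example: label 16 is the first value outside the reserved range 0-15.
entry = pack_label_entry(label=16, tc=5, bottom_of_stack=True, ttl=64)
print(unpack_label_entry(entry))
```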
MPLS operates at an OSI Model layer that is generally considered to lie between traditional definitions of Layer 2 (Data Link Layer) and Layer 3 (Network Layer), and thus is often referred to as a "Layer 2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.
A number of different technologies were previously deployed with essentially identical goals, such as frame relay and ATM. MPLS technologies have evolved with the strengths and weaknesses of ATM in mind. Many network engineers agree that ATM should be replaced with a protocol that requires less overhead while providing connection-oriented services for variable-length frames. MPLS is currently replacing some of these technologies in the marketplace, and it is quite possible that MPLS will eventually replace them entirely, aligning network infrastructure with current and future needs.
In particular, MPLS dispenses with the cell-switching and signaling-protocol baggage of ATM. MPLS recognizes that small ATM cells are not needed in the core of modern networks, since modern optical networks (as of 2008) are so fast (at 40 Gbit/s and beyond) that even full-length 1500-byte packets do not incur significant real-time queuing delays (the need to reduce such delays, for example to support voice traffic, was the motivation for the cell nature of ATM).
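To make the delay argument above concrete, here is a quick back-of-the-envelope sketch of serialization delay; the 1.5 Mbit/s comparison link is an assumed figure, chosen as being closer to the slow access-link speeds of the ATM era, and is not taken from the post.

```python
def serialization_delay_us(frame_bytes: int, link_bits_per_second: float) -> float:
    """Time to clock a frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bits_per_second * 1e6

# A full-length 1500-byte packet on a 40 Gbit/s core link: about 0.3 microseconds.
print(serialization_delay_us(1500, 40e9))
# The same packet on an assumed 1.5 Mbit/s link: about 8000 microseconds (8 ms).
print(serialization_delay_us(1500, 1.5e6))
# A 53-byte ATM cell on that slow link, for comparison: roughly 283 microseconds.
print(serialization_delay_us(53, 1.5e6))
```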
At the same time, MPLS attempts to preserve the traffic engineering and out-of-band control that made frame relay and ATM attractive for deploying large-scale networks.
While the traffic management benefits of migrating to MPLS are quite valuable (better reliability, increased performance), there is a significant loss of visibility and access into the MPLS cloud for IT departments.

Friday, July 23, 2010

Physical Layout


A typical server rack, commonly seen in colocation.
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers to large freestanding storage silos that occupy many tiles on the floor. Some equipment, such as mainframe computers and storage devices, is often as big as the racks themselves and is placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each[6]; when repairs or upgrades are needed, whole containers are replaced rather than repairing individual servers.[7]
Local building codes may govern the minimum ceiling heights.


A bank of batteries in a large data center, used to provide power until diesel generators can start.
The physical environment of a data center is rigorously controlled:
• Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments"[8] recommends a temperature range of 16–24 °C (61–75 °F) and a humidity range of 40–55% with a maximum dew point of 15 °C as optimal for data center conditions.[9] The electrical power used heats the air in the data center; unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return space air below the dew point. With too much humidity, water may begin to condense on internal components; if the atmosphere is too dry, ancillary humidification systems may add water vapor, because excessively low humidity can cause static electricity discharge problems that may damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs. (A rough check of this recommended envelope is sketched after this list.)
• Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. Washington state now has a few data centers that cool all of the servers using outside air 11 months out of the year. They do not use chillers/air conditioners, which creates potential energy savings in the millions.[10]
• Backup power consists of one or more uninterruptible power supplies and/or diesel generators.
• To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
• Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80–100 cm (31–39 in) void to provide better and more uniform air distribution. These provide a plenum for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling. Data cabling is typically routed through overhead cable trays in modern data centers, but some still recommend under-floor cabling for security reasons and to allow for the addition of cooling systems above the racks should that enhancement become necessary. Smaller or less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.
• Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using handheld fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full-scale fire if it develops. Fire sprinklers require 18 in (46 cm) of clearance (free of cable trays, etc.) below the sprinklers. Clean agent gaseous fire suppression systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed. For critical facilities these firewalls are often insufficient to protect heat-sensitive electronic equipment, however, because conventional firewall construction is rated only for flame penetration time, not heat penetration. There are also deficiencies in the protection of vulnerable entry points into the server room, such as cable penetrations, coolant line penetrations and air ducts. For mission critical data centers, fireproof vaults with a Class 125 rating are necessary to meet NFPA 75[11] standards.
• Physical security also plays a large role in data centers. Physical access to the site is usually restricted to selected personnel, with controls including bollards and mantraps.[12] Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition mantraps is starting to become commonplace.
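As noted in the air-conditioning item above, the following minimal sketch checks whether a measured temperature and relative humidity pair falls inside the recommended envelope quoted there (16–24 °C, 40–55% RH, dew point at most 15 °C). The Magnus approximation used for the dew point and all helper names are assumptions for illustration, not part of the guideline itself.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point via the Magnus formula (a and b are standard approximation coefficients)."""
    a, b = 17.27, 237.7
    gamma = math.log(rel_humidity_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

def within_recommended_envelope(temp_c: float, rel_humidity_pct: float) -> bool:
    """Check the envelope quoted above: 16-24 degC, 40-55% RH, dew point no higher than 15 degC."""
    return (16 <= temp_c <= 24
            and 40 <= rel_humidity_pct <= 55
            and dew_point_c(temp_c, rel_humidity_pct) <= 15)

print(within_recommended_envelope(22, 50))  # True: comfortably inside the envelope
print(within_recommended_envelope(27, 60))  # False: too warm and too humid
```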

source : http://en.wikipedia.org/wiki/Data_center

Friday, July 16, 2010

Data Center Classification

The TIA-942: Data Center Standards Overview describes the requirements for data center infrastructure. The simplest is a Tier 1 data center, which is basically a server room following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods. Another consideration is the placement of the data center in a subterranean context, for data security as well as for environmental considerations such as cooling requirements.[2]
The four levels are defined, and copyrighted, by the Uptime Institute, a Santa Fe, New Mexico-based think tank and professional services organization. The levels describe the availability of data from the hardware at a location: the higher the tier, the greater the availability. The levels are listed below; the sketch after the table converts each availability figure into annual downtime.

Tier Level Requirements
Tier 1
• Single non-redundant distribution path serving the IT equipment
• Non-redundant capacity components
• Basic site infrastructure guaranteeing 99.671% availability
Tier 2
• Fulfils all Tier 1 requirements
• Redundant site infrastructure capacity components guaranteeing 99.741% availability
Tier 3
• Fulfils all Tier 1 and Tier 2 requirements
• Multiple independent distribution paths serving the IT equipment
• All IT equipment must be dual-powered and fully compatible with the topology of the site's architecture
• Concurrently maintainable site infrastructure guaranteeing 99.982% availability
Tier 4
• Fulfils all Tier 1, Tier 2 and Tier 3 requirements
• All cooling equipment is independently dual-powered, including chillers and heating, ventilation and air conditioning (HVAC) systems
• Fault-tolerant site infrastructure with electrical power storage and distribution facilities guaranteeing 99.995% availability
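To make the availability percentages in the table more tangible, the small sketch below converts each figure into the maximum downtime it allows per year; the dictionary and helper are illustrative only, not part of the standard.

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours, ignoring leap years

tier_availability = {       # percentages quoted in the table above
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, pct in tier_availability.items():
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: {pct}% availability -> up to {downtime_hours:.1f} hours of downtime per year")
# Roughly: Tier 1 about 28.8 h, Tier 2 about 22.7 h, Tier 3 about 1.6 h, Tier 4 about 0.4 h (26 minutes)
```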

source : http://en.wikipedia.org/wiki/Data_center

Thursday, July 15, 2010

Requirements for modern data centers

Racks of telecommunications equipment in part of a data center.
IT operations are a crucial aspect of most organizational operations. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.

source : http://en.wikipedia.org/wiki/Data_center

Thursday, July 8, 2010

Data Center

Definition :

A data center (or datacentre) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.

Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, elevated floors, and cable trays (installed overhead or under the elevated floor). Also, old computers required a great deal of power, and had to be cooled to avoid overheating. Security was important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of client-server computing, during the 1990s, microcomputers (now called "servers") started to find their places in the old computer rooms. The availability of inexpensive networking equipment, coupled with new standards for network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time.

The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results.

As of 2007[update], data center design, construction, and operation is a well-known discipline. Standard documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. There is still a lot of development being done in operation practice, and also in environmentally-friendly data center design. Data centers are typically very expensive to build and maintain.

source : http://en.wikipedia.org/wiki/Data_center

Tuesday, July 6, 2010

Trunking

Etymology

How the term came to apply to communications is unclear, but its previous use in railway track terminology (e.g., India's Grand Trunk Road, Canada's Grand Trunk Railway) was based on the natural model of a tree trunk and its branches. It is likely that the same analogy drove the communications usage.

An alternative explanation is that, from an early stage in the development of telephony, the need was found for thick cables (up to around 10 cm in diameter) containing many pairs of wires. These were usually covered in lead. Thus, both in colour and size they resembled an elephant's trunk. This leaves open the question of what term was applied to connections among exchanges during the years when only open wire was used.

Radio communications

In two-way radio communications, trunking refers to the ability of transmissions to be served by free channels whose availability is determined by algorithmic protocols. In conventional (i.e., not trunked) radio, users of a single service share one or more exclusive radio channels and must wait their turn to use them, analogous to the operation of a group of cashiers in a grocery store, where each cashier serves his/her own line of customers. The cashier represents each radio channel, and each customer represents a radio user transmitting on their radio.

Trunked radio systems (TRS) pool all of the cashiers (channels) into one group and use a store manager (site controller) that assigns incoming shoppers to free cashiers as determined by the store's policies (TRS protocols).
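A toy model of the pooling just described: a hypothetical site controller keeps a set of free channels, hands one to each transmission request, and returns it to the pool afterwards. The class, method names and channel numbers are invented for illustration and do not reflect any real TRS protocol.

```python
class SiteController:
    """Toy site controller: pools channels and assigns any free one to a request."""

    def __init__(self, channels):
        self.free = set(channels)   # the pooled "cashiers"
        self.in_use = {}            # talkgroup -> channel currently assigned

    def request_channel(self, talkgroup: str):
        """Assign a free channel to a transmission, or None if all are busy."""
        if not self.free:
            return None             # caller must wait, as in conventional radio
        channel = self.free.pop()
        self.in_use[talkgroup] = channel
        return channel

    def release_channel(self, talkgroup: str):
        """Return the channel to the pool when the transmission ends."""
        channel = self.in_use.pop(talkgroup)
        self.free.add(channel)

controller = SiteController(channels=[1, 2, 3])
print(controller.request_channel("fire-dispatch"))  # any free channel
print(controller.request_channel("public-works"))   # a different free channel
controller.release_channel("fire-dispatch")         # channel goes back into the pool
print(controller.request_channel("ems"))            # may reuse the released channel
```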

In a TRS, individual transmissions in any conversation may take place on several different channels, much as a family of shoppers checking out all at once may be assigned different cashiers by the store manager. Similarly, if a single shopper checks out more than once, they may be assigned a different cashier each time.

Trunked radio systems provide greater efficiency at the cost of greater management overhead. The store manager's orders must be conveyed to all the shoppers. This is done by assigning one or more radio channels as the "control channel". The control channel transmits data from the site controller that runs the TRS, and is continuously monitored by all of the field radios in the system so that they know how to follow the various conversations between members of their talkgroups (families) and other talkgroups as they hop from radio channel to radio channel.

TRSs have grown massively in complexity since their introduction, and now include multi-site systems that can cover entire states or groups of states. This is similar to the idea of a chain of grocery stores: the shopper generally goes to the nearest grocery store, but if there are complications or congestion, the shopper may opt to go to a neighboring store. Each store in the chain can talk to the others and pass messages between shoppers at different stores if necessary, and they provide backup to each other: if a store has to be closed for repair, the other stores pick up its customers.

TRSs have greater risks to overcome than conventional radio systems in that a loss of the store manager (site controller) would leave the system's traffic unmanaged. In this case, the TRS usually reverts automatically to conventional operation. In spite of these risks, TRSs usually maintain reasonable uptime.

TRSs are more difficult to monitor with a radio scanner than conventional systems; however, larger manufacturers of radio scanners have introduced models that, with a little extra programming, are able to follow TRSs quite efficiently.

Telecommunications

Trunk line

A trunk line is a circuit connecting telephone switchboards (or other switching equipment), as distinguished from a local loop circuit, which extends from telephone exchange switching equipment to individual telephones or information origination/termination equipment.[1][2]

When dealing with a private branch exchange (PBX), trunk lines are the phone lines coming into the PBX from the telephone provider.[3] This differentiates these incoming lines from extension lines that connect the PBX to (usually) individual phone sets. Trunking saves cost, because there are usually fewer trunk lines than extension lines, since it is unusual in most offices to have all extension lines in use for external calls at once. Trunk lines transmit voice and data in formats such as analog, T1, E1, ISDN or PRI. The dial tone lines for outgoing calls are called DDCO (Direct Dial Central Office) trunks.
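The sizing trade-off described above (fewer trunk lines than extension lines) is classically estimated with the Erlang B formula, which gives the probability that a new call finds every trunk busy for a given offered load. The sketch below uses the standard iterative form; the traffic figures in the example are assumptions for illustration only.

```python
def erlang_b(offered_traffic_erlangs: float, trunks: int) -> float:
    """Blocking probability: chance a new call finds all trunks busy (Erlang B, iterative form)."""
    b = 1.0
    for m in range(1, trunks + 1):
        b = (offered_traffic_erlangs * b) / (m + offered_traffic_erlangs * b)
    return b

# Suppose 100 extensions each offer 0.05 erlangs of external traffic (5 erlangs in total).
offered = 100 * 0.05
for n_trunks in (5, 8, 10, 12):
    print(f"{n_trunks} trunks -> blocking probability {erlang_b(offered, n_trunks):.3f}")
# Far fewer than 100 trunks already keep blocking low, which is the point of trunking.
```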

Trunk call

In the UK and the Commonwealth countries, a trunk call was a long distance one as opposed to a local call. See Subscriber trunk dialling and Trunk vs Toll.

Telephone exchange

Trunking also refers to the connection of switches and circuits within a telephone exchange.[4] Trunking is closely related to the concept of grading. Trunking allows a group of inlet switches or circuits to share access to a group of outlet switches or circuits at the same time. Thus the service provider can provision fewer circuits than might otherwise be required, allowing many users to "share" a smaller number of connections and achieve capacity savings.[5][6]

Computer networks

Link aggregation

In computer networking, trunking is a slang term for the use of multiple network cables or ports in parallel to increase the link speed beyond the limit of any single cable or port. This is called link aggregation. These aggregated links may be used to interconnect switches.
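A common way such aggregated links are used in practice is to hash each flow's addresses onto one member link, so that frames of a single flow stay in order on one cable while different flows spread across the bundle. The sketch below is a simplified, hypothetical hashing scheme, not the algorithm of any particular switch; the MAC addresses are made up.

```python
import hashlib

def pick_member_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Hash the flow identifiers so every frame of a flow uses the same member link."""
    digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
    return digest[0] % num_links

# Four cables aggregated between two switches: each flow sticks to one member link.
links = 4
flows = [
    ("aa:bb:cc:dd:ee:01", "11:22:33:44:55:01"),
    ("aa:bb:cc:dd:ee:02", "11:22:33:44:55:01"),
    ("aa:bb:cc:dd:ee:03", "11:22:33:44:55:02"),
]
for src, dst in flows:
    print(f"{src} -> {dst} carried on member link {pick_member_link(src, dst, links)}")
```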

VLANs

In the context of VLANs, Avaya and Cisco use the term "trunking" to mean "VLAN multiplexing": carrying multiple VLANs over a single network link through the use of a "trunking protocol". To allow multiple VLANs on one link, frames from individual VLANs must be identified. The most common and preferred method, IEEE 802.1Q, adds a tag to the Ethernet frame header, labeling it as belonging to a certain VLAN. Since 802.1Q is an open standard, it is the only option in an environment with equipment from multiple vendors.
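To show concretely what an 802.1Q tag adds, the sketch below inserts a tag (TPID 0x8100 followed by a 16-bit TCI: 3-bit priority, 1-bit DEI, 12-bit VLAN ID) into a minimal Ethernet header and reads the VLAN ID back out. The MAC addresses, VLAN number and helper names are made up for illustration.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC addresses."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = ((priority & 0x7) << 13) | ((dei & 0x1) << 12) | vlan_id
    return frame[:12] + struct.pack("!HH", TPID_8021Q, tci) + frame[12:]

def read_vlan_id(frame: bytes):
    """Return the VLAN ID if the frame carries an 802.1Q tag, else None."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != TPID_8021Q:
        return None
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF

# Minimal untagged frame: destination MAC, source MAC, EtherType 0x0800 (IPv4), dummy payload.
untagged = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
tagged = add_vlan_tag(untagged, vlan_id=42, priority=5)
print(read_vlan_id(untagged), read_vlan_id(tagged))  # None 42
```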

Cisco also has a proprietary trunking protocol called Inter-Switch Link (ISL), which encapsulates the Ethernet frame in its own container and labels the frame as belonging to a specific VLAN.

source : http://en.wikipedia.org/wiki/Trunking