
Saturday, April 25, 2009

Open Source Software (OSS)

OSS literally means software whose source code is open. Stretching this
definition, to be termed OSS, the software should also not restrict
the user from freely using, modifying, and distributing it. The
most significant differences between OSS and Proprietary Software
(PS) are immediately apparent. PS is usually distributed as a binary
without the source code, so it is almost impossible to dissect the
program to learn about its structure and logic, or to make corrections
or modifications if necessary. PS is normally sold with many
conditions that restrict the usage and distribution of the program.

There is a general opinion that all OSS is also free of cost. That isn’t
the case. Proponents of OSS do not restrict the sale of the OSS.
People are free to package and sell OSS at a price they consider fit.

But given that the OSS is freely available to the next person
as well, it is easy to conclude that an arbitrarily high price could
not be sustained. If the price is too high, other vendors will enter
the market and sell the same product more cheaply. Thus,
market forces will ensure that only a fair price is charged.

How To Unblock Youtube At School

Proxy Sites
Why do people want to unblock Youtube? Youtube sucks. Rather unblock something useful, like Wikipedia. Seriously. But anyway, here's how you can get on any website at school (yes, including Youtube, you addict)... you need something called a...
Proxy
New proxies get blocked all the time, so you really need to keep finding new ones when that happens. Here is a proxy you can try:
www.haoproxy.com

The trick is to find a proxy that very few people know about. Most of the ones you'll find by searching on the internet will get blocked, because that's the first place a teacher will look when they want to know what sites they should block. So, the best way to unblock sites (yes, and Youtube... sigh) is to use other methods...

Other ways to unblock sites
Use Google's cached version of the site. This will let you read just about any site, but you can't log in to member sites (like Youtube or whatever). This may or may not also work with other search engines, like Yahoo and whatever else there is.
Translator sites can be used to access pages much like the search engine cache trick. Example: babel.altavista.com

You can use RSS readers to access news sites instead of going to the site itself. www.bloglines.com is just one, but there are millions of these.

You can use "web accelerators" to unblock sites too. Like webaccelerator.google.com

Unblocker Programs
UltraSurf is free software and is available in English and Chinese. Once started, it opens an Internet Explorer type program that is automatically configured to allow you to browse websites through UltraSurf. Other browsers must be configured manually. The UltraSurf site itself might be blocked, so you may have to download this at home and bring it to school on a flash drive or something.
Similar unblockers:
FreeGate
GPass
HTTP Tunnel
JAP (Java Anon Proxy)
Tor
I2P


Set up your own proxy
This is the most reliable way to unblock sites... but you have to do it yourself. Basically, you set up your own proxy on your home computer or on a web host somewhere, and use it yourself. Don't tell anyone about it, except maybe a close circle of friends. The more people who know about it, the more likely it will get blocked. But hey, if you can make one proxy, you can make more of them, so... :)
How to make a proxy
Install Apache and PHP on a computer that has permanent access to the internet. Download and install a PHP script called PHProxy (you'll have to Google for it, its owner stopped developing it). You can also use CGIProxy. Then, as long as you know your IP address, you can get to your proxy from anywhere. You can also set up DNS for it if you want, but that just makes it easier to block. So, if your IP is, say, 123.45.67.89, you'll access your proxy by typing in http://123.45.67.89 at school.
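If you'd rather not run PHP at all, the same idea can be sketched in a few lines of Python. This is only a toy illustration, not PHProxy itself: the `ProxyHandler` class, the `?url=` parameter and the port 8000 are all made up for this example, and a real proxy would also have to handle headers, POST requests and link rewriting.

```python
# Toy sketch of a PHProxy/CGIProxy-style personal web proxy (illustration only).
# The blocked page is passed as a query parameter:
#   http://YOUR_IP:8000/?url=http://example.com
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

def extract_target(path):
    """Pull the ?url=... parameter out of the request path."""
    params = parse_qs(urlparse(path).query)
    return params.get("url", [None])[0]

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = extract_target(self.path)
        if target is None:
            self.send_error(400, "expected ?url=...")
            return
        with urlopen(target) as upstream:  # the *home server* fetches the page
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)  # ...and relays it back to the school browser

# To run it on your home box:
#   HTTPServer(("", 8000), ProxyHandler).serve_forever()
```

The point is the same as with PHProxy: the school's filter only ever sees a connection to your home IP, so the blocked site never shows up in its logs.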
psiphon turns a regular home computer into a personal, encrypted proxy server. psiphon is free and open source, and comes in Linux and Windows versions. It is easy to install, but if your computer is behind a home router it may require some configuration.

Peacefire Circumventor is similar to psiphon, but a bit harder to install.

They don't just block Youtube
Schools have been known to block a lot of educational and downright useful sites too. In fact, the software they use to block things with is extremely ineffective at blocking access to anything (as this article has just proven). For more information about internet censorship, go to Peacefire.org. Researchers have found that commercial filtering technologies mistakenly block access to content related to women’s health, gay and lesbian rights groups, and sexual education for teenagers. See also: Everyone's Guide to By-Passing Internet Censorship for Citizens Worldwide.

Don't forget to tell your friends about www.haoproxy.com!

Natural Learning

A natural approach to home schooling.
Natural learning is the basis of a relaxed, obligation free lifestyle in which children are free to reach their potential and parents are present to enjoy the process.

Natural Learning is not just a style of education, it's a lifestyle! It's about facilitating and supporting your children to make their own life choices, follow their dreams develop their potentialities and enjoy that journey. It's a process of being, discovering, creating, inspiring, sharing, enjoying and appreciating each other...

The control over what, when and how things are learned belongs to the individual child. Parents, siblings and others may provide a source of inspiration but ultimately the individual child makes their own assessments
about themselves and the world around them.

Natural Learning goes hand in hand with a natural parenting style that is based on principles rather than rules. It's almost like an extension of attachment parenting and compassionate parenting techniques...

The ultimate aim of educating and parenting in this way is to grow emotionally healthy adults with a high self esteem and a high regard for all living things.
Adults who:

* explore their passions
* make their own decisions based on an unbiased evaluation of circumstances
* possess the skills to expand on their skillset, gain new knowledge and pursue goals
* maintain healthy relations with others
* are socially confident
* act from a peaceful place in their hearts.

Research on the effectiveness of home-based education demonstrates that home-educated children usually surpass their school-attending peers in social skills, social maturity, emotional stability, academic achievement, personal confidence, communication skills and other respects.
"Curiosity is as much the parent of attention, as attention is of memory." (Richard Whately)


Natural learning is not compatible with iron-handed classrooms or rigid curricula. Learning cannot be measured by multiple-choice tests. Natural learning is basically an enjoyable thing to do. It is the learning that people do every day of their lives. Natural learning is, and will always remain, the most important form of learning.

People have powerful natural mechanisms for learning that allow them to master an enormous volume and variety of material during their lifetimes. Adults imposing their ideas of curriculum and lesson plans on children and young people is not an effective method of teaching. Learning that is truly natural will be the result of the child's own development and own motivations.


Learning implies growth, and growth implies the realization of an inner pattern of design and harmony. It is a natural process that every child experiences in his/her own unique way. Learning to learn is the most fundamental learning of all.

Learning processes can be both planned and opportunistic. Schools fail to educate because they don't leverage the natural learning process. What is the attitude of students towards classroom learning? How do schools fail to ignite the natural learning process? Can we make schools conform more closely to natural learning? How does a fixed curriculum inhibit learning?

Isn't it scary that many school-age children associate learning with fear of failure...
"My education was dismal. I went to a series of schools for mentally disturbed teachers." (Woody Allen)


Unlike their formally schooled counterparts, natural learners interact with people of all ages, cultures, religions and races. Natural learning happens every day, regardless of school days or terms. Developing a broader view of the world through a variety of social situations encourages creativity, empathy and lateral thinking in kids.

Education for adaptability through 'self-empowerment' is 'holistic education'. The wonder is that it has taken so long for society to notice that education is stymied in those institutions. Education and training should be constructed so that learners are confronted with many new, useful experiences that will be valuable to recall in the future.

How does formal education differ from childhood learning?

Cramming for an exam or trying to please a teacher ought not to be the goal of those seeking an education. The initial reasons for establishing universal schooling were more about social factors involved in producing a working class for the new industrial world of the 18th and early 19th Century. The kinds of social skills (obedience, deference, and unquestioning behaviour) and the education production function (didactic instruction) needed for factories aren't those needed for the post-industrial age of today. Self-motivation, self-direction and self-instruction are critical along with the broad generic skills of communication, information management, problem solving, team-working, and lateral-thinking that are highly sought after by employers.

"I have never let school interfere with my education." (Mark Twain)


Education research demonstrates that the learning environment of home is a better catalyst of educational success than the learning environment of school. Intrinsic motivation to learn is far more important in the long run than extrinsic motivation (such as through exams). School bears no relation to the real-life situations of the workplace or home environment.

"Education is not a preparation for life; education is life itself." (John Dewey) Holistic education is education for the development of human potential. In the paradigm of 'holistic education' the function of the effective teacher or 'soul educator' is defined in terms of the 'facilitation of learning'. Individual human development depends on education which provides the right conditions for the facilitation of learning. Education that engages the person as a whole, and their instinctive motivation for growth, is exactly the kind necessary to prepare the leaders of tomorrow for the catastrophic social problems they face.

Children learn when they are ready, which makes it very easy for them to pick up knowledge. Children will absorb much more because they are learning by their own desire to know. Naturally modelling positive examples set by their parents is the most natural form of learning, easily observed in young children. It is clear that the interesting action, the stuff that comprises a child's mental life in school, is about interaction with other children in one form or another. Learning that is truly natural will be the result of the child's own development and own motivations. This is the way children learn to walk and talk. The environment of the home encourages kids to maintain a higher level of concentration for longer, without the pressures of constant social stress.

Natural learning offers unparalleled opportunities to capture learning moments and turn them into meaningful and enduring knowledge. Trusting ourselves as parents and as mothers, our intuition and our personalities, is important in our homeschooling adventure.

Each of us should be free to learn in our own way and our own time.


The term 'unschooling' was coined back in the sixties when John Holt, trying to promote school reform, learned about homeschooling. John Holt came up with the word unschooling to describe learning that was diametrically opposed to that usually used in the institutional setting. There has been a lot of debate about unschooling and many heated discussions with people taking sides for and against unschooling. I don’t believe there needs to be such adamant pro and con debate about unschooling. One of the basic premises of unschooling or natural learning is that of letting the child follow his interests. I think of unschooling as tapping into the inner structure of the child rather than imposing an external structure. This varies so much from each individual and family, that it's hard to say "unschooling looks like this", because it will look different in each house or person as they joyfully pursue their passions.

Natural learning is what happens anyway, despite what you do. However, natural learning is so vital to a person's growth and happiness that it should never be taken for granted by educators, by parents or by anyone concerned about the growth and development of young people. If educators are going to be concerned with the development of the whole person, one of their jobs should be to determine how and when natural learning is thwarted, blocked or undermined. Maybe natural learning is a kind of religion because, more than anything, it seems to be about trust and acceptance. "Natural learning is invigorating," says author Ron Dultz, "because the learner feels a strong personal connection to what is being learned, is ripe for it and has selected it. This natural learning is always self-directed. I believe that natural learning is what happens before school and after school."

In the modern world of Web 2.0 and social networking via the internet, this "Natural Learning" has a new platform. Young people are doing this automatically, as described by author Don Tapscott in his book "Grown Up Digital".

Legal Issues for IT Professionals

An Introductory Overview of Legal Issues for IT Professionals in the UK
This knol provides an introductory overview of the main legal issues and pieces of UK legislation that are relevant to IT professionals. Three of the most significant - the Computer Misuse Act, the Data Protection Act and Intellectual Property Rights - are expanded more fully in their own independent knols. It also includes a brief discussion of some cross-national issues, including the Gary McKinnon case, the US PATRIOT Act and the Council of Europe Convention on Cybercrime.

Computer Misuse Act 1990
The Computer Misuse Act (CMA) [1] - the so-called "hacking law" - is designed to prevent unauthorised access to computer systems. The Act creates three categories of offence.

1. Unauthorised access to computer material.

This deals with unauthorised access to computer systems without the intent to commit a serious crime such as fraud. It is regarded as a relatively minor offence and can be dealt with in a Magistrates' Court.

2. Unauthorised access with intent to commit or facilitate commission of further offences.

This deals with unauthorised access to computer systems with the specific intention of committing, or facilitating the commission, of a serious crime. This is a much more serious offence, and is dealt with at the Crown Court.

3. Unauthorised modification of computer material.

This covers unauthorised modification of computerised information, and thus includes viruses, logic bombs and trojans. This is also a very serious offence.

There is more detail about this Act in a separate knol - Computer Misuse Act 1990.

Data Protection Act 1998
The Data Protection Act 1998 (DPA) [2] replaces the earlier act of 1984, and is intended to implement the 1995 European Directive on Data Protection. It is designed to cover the collecting, storing, processing and distribution of personal data. The act places obligations on those who record and use personal data and it gives rights to individuals about whom information might be held. Most significantly, the Subject Access Right entitles any individual to ask for, and be given, details of any personal data about them that is being stored or processed.

The Information Commissioner [3] is an independent government authority, with responsibilities to provide information and advice in relation to the Act, and to enforce compliance with it.

There is more detail about this Act in a separate knol - Data Protection Act 1998.

Freedom of Information Act 2000
The Freedom of Information Act 2000 (FOIA) [4] gives individuals the right to access information held by public authorities. It differs from the Data Protection Act in that, amongst other things, it is not restricted to personal data. It gives the individual the right to do two things.

* To ask any public organisation covered by the Act what information it has on any subject you specify.

* If the organisation has the information, to be given copies.


Provided the information is not legally exempt from disclosure, the organisation must tell you what it has and give it to you within twenty working days. In many cases, even if it withholds the information, it at least has to tell you what it has.

A private company may be affected by the Act if data on the private company is held by a public authority. This may happen when, for example, the company has had a contract to supply goods or services to the public authority. In such cases the data may be subject to the Act.

The Information Commissioner [3] also has responsibility for providing information and advice in relation to this Act, and for enforcing compliance with it.
A list of BBC news stories made possible by the Freedom of Information Act provides a vivid illustration of the impact that the Act has had.

Intellectual Property Rights
Intellectual property (IP) allows people to own their creativity and innovation in the same way that they can own physical property. The owner of IP can control how others use his or her ideas, in order to profit from them. This benefits wider society, as well as the owner, because it encourages further innovation and creativity.

There are a variety of legal rights that can be used to protect IP. These include: patents, copyright and database rights. The owner of an IP right may exploit, and benefit from, that right by a number of means.

* They may use it directly in the creation of products or services – either for their own use or for sale.
* They may license the IP right so that others may make use of it – and be paid for the license.
* They may sell the IP right to a third party.


The most common method of protecting computer software is copyright. The copyright holder sells the user a license to use the software. The user is allowed to use the software but never owns it.

The UK Intellectual Property Office [5] contains detailed information about a wide range of IP rights. The principal legislation on IP protection in the UK can be found in the Copyright, Designs and Patents Act 1988 [6].

There is more detail about intellectual property rights in a separate knol - Intellectual Property for IT Professionals.

Health and Safety at Work
The Health and Safety at Work Act 1974 [7], and related legislation, imposes rights and responsibilities in relation to safety in the workplace.

* It is an employer’s duty to protect the health, safety and welfare of their employees, and other people who might be affected by what they do.
* It is an employee’s responsibility to take reasonable care of their own health and safety, and that of others who may be affected by what they do or do not do.


The Act provides for protection against - e.g. - bullying and harassment, as well as the more obvious physical aspects of health and safety.

The Health and Safety (Display Screen Equipment) Regulations 1992 [8] provide specific regulations relating to the use of display equipment and computer workstations. The Regulations require employers to minimise the risks in VDU work by ensuring that workplaces and jobs are well designed.

The Health and Safety Executive [9] is an independent body whose job is to protect people against risks to health or safety arising out of work activities.

Public Interest Disclosure Act 1998

The Public Interest Disclosure Act 1998 [10] - the so-called "whistleblowers' law" - protects workers who raise concerns over any of the following malpractices at work:

* a criminal offence
* the breach of a legal obligation
* a miscarriage of justice
* a danger to the health and safety of any individual
* damage to the environment
* deliberate covering up of information tending to show any of the above


The Act protects whistleblowers from being victimised or sacked as a result of their whistleblowing. Although the Act provides protection, there are still risks for the whistleblower, and this is not something that should be undertaken lightly. Martin (1999) [11] gives detailed information and advice to anyone who is considering becoming a whistleblower.

WorldWideWhistleBlowers [12] provides a "forum and informational source for those brave individuals who would like to go public with evidence of actions contrary to the public good".

Defamation Act 1996
The Defamation Act 1996 [13] makes it an offence in the UK to disseminate defamatory statements, including any sent via e-mail or posted on a bulletin board. The same Act allows a defence of innocent dissemination, which recognises that there is no offence if you don't know that you're disseminating such statements. This means that, for example, an internet service provider may not be responsible for defamatory material published on its servers.

Consumer Protection (Distance Selling) Regulations 2000
A range of UK laws apply to the sale of goods, regardless of whether that sale is completed in person, by mail order, or via the internet. Most of them are only applicable to the UK. Internet-based sales are usually treated in the same way as ‘mail order’. If you are buying from companies based in the UK the Consumer Protection (Distance Selling) Regulations 2000 [14] apply. The key features of these regulations are:

* The consumer must be given clear information about the goods or services offered.
* After making a purchase the consumer must be sent confirmation.
* The consumer has a cooling-off period of 7 working days, during which time they may cancel their order.


A Cross National Perspective
Other nations have laws that parallel the UK legislation described above. They may not always have the same names and they may not be exactly equivalent in every detail, but there is frequently a lot of overlap. Due to the global nature of communication technologies, it is increasingly important to be aware of the situation beyond the UK.

Council of Europe Convention on Cybercrime

The Council of Europe Convention on Cybercrime [15] deals with crimes - involving infringements of copyright, computer-related fraud, child pornography and violations of network security - committed via computer networks. It aims to promote international co-operation towards a common criminal policy aimed at the protection of society against cybercrime.

The list of signatories [16] to the Convention includes France, Germany, UK, USA and Japan.
The United States PATRIOT Act 2001
The PATRIOT Act - Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism - was part of the United States' response to the 9/11 attacks. Amongst other things, this legislation strengthened the US computer misuse laws to include:

"a computer located outside the United States that is used in a manner that affects inter-state or foreign commerce or communication of the United States"

Worries have been expressed that this may be interpreted as applying to data that simply passes through the USA. Many of the Act's provisions had a sunset clause, which meant that they would have ceased to be law in 2005. In the months preceding the sunset date, supporters of the Act pushed to make its provisions permanent. They largely succeeded, and the Act was reauthorised in 2005.

The Gary McKinnon Case
In November 2002 Gary McKinnon, a UK citizen, was arrested on suspicion of hacking into US military computer networks the previous year. He had allegedly used computers located in the UK to hack into US computers, without physically visiting the US.

Mr McKinnon was originally arrested under the UK Computer Misuse Act, a crime for which he might reasonably have expected a community service sentence. Unfortunately for him - as it turned out - the Crown Prosecution Service did not charge him.

In 2005 the United States government began extradition proceedings. If extradited to the US, Mr McKinnon faces up to seventy years in prison. He is contesting the extradition, arguing that the alleged crimes were committed in the UK and so he should face trial in the UK rather than the USA.

Web3.0

Web 1.0 Web 2.0 Web 3.0

-1-Computer in brief

The company -1-Computer in a few words..

* 1991 - foundation (Sion, Switzerland)
* 1994 - adoption of the Linux platform
o freeware-based solution
* 1997 - realisation of the first Internet sites
* 1999 - hosting of Internet sites
* 2001 - development for mobile Internet
* 2002 - application of Web (XHTML 1.1) standards
* 2003 - 1Work CMS (mobile, PDA, UMPC, PC)
* 2005 - Artemis (traceability for wine cellars)


The Web in brief

Some historical dates ...

* 1990 - Software WorldWideWeb (Nexus) for NeXT
* 1993 - NCSA Mosaic
o multi-platform solutions, more stable
* 1995 - Domination of the market by Netscape and 1st release of MS Internet Explorer
* 1998 - Mozilla project launched (ends in 2003)
* 2000 - Release of Konqueror (KDE). Domination of the market by MS Internet Explorer
* 2002 - Appearance of Firefox, based on Mozilla; version 1.0 released 9th Nov. 2004
* 2007 - Internet Explorer 7, FireFox 2.0, Prism (Mozilla labs)
* 2008 - Release of FireFox 3.0, Fennec (mobile), ...


Themes

* How to position oneself on the Web? What future?
* How to remain hardware-platform independent (terminals, mobiles, plasma screens)?
o Easily publish on several supports: from the small mobile screen to the huge plasma screen

Methodology - Graphical

* The Internet is a spiderweb; the Web is a rainbow....
o There is a huge number of colors, but we only see a small part of them, and they change depending on the means used!


Web 1.0 Characteristics

How to define Web1.0?

* static pages, sometimes dynamic
* changes possible for the webmaster only
* unitary web pages: text, images, links

STATIC....

* The webmaster uses one or more software tools to modify the data.
* Internet users do not contribute directly to data changes

Web1.0 Graphically

* Web1.0: static pages, text, webmaster,....
o Publication on several supports, either on the small mobile screen or on a standard screen (resolution).
+ Static pages, static mobility.


Web 2.0 Characteristics

How to situate Web2.0?

* dynamic sites, sometimes static
* changes to the content by the webmaster and users
* web site solution (software)

DYNAMIC....

* The webmaster does not need separate software tools to modify the data.
* The web page reader can contribute to the data.


Web 2.0 Models
Web resources?

* Web2.0 or Web2.c for users
* Free-market model (advertisements)

Other...

* Web2.b for business (intranet, extranet)


Web2.0 Graphically

* Arrival of Web2.0. Web user(s), web community
* The web becomes software, tool, sharing....
o Upgrading to Web2.0 offers a much broader range of services


Web3.0 Themes

* How does the Web evolve? What future?
* Burst of technologies:
o technical and software (Ajax, Flash, 3D, ???, ...)
o hardware: mobiles, PDA, UMPC, PC, large screens,.....
(Nokia N810 5", Eee PC 7",...)
o network: WiMAX (Centrino 2), UWB,...
o web objects: DataMatrix, RFID, ...


Web 3.0 Characteristics 1 of 2

How to define Web3.0?

* Mobility: every kind of hardware, screen, printer
* Universality: for every browser
* Accessibility: web standards -> databases
* Application: web software solutions (SaaS)


Web 3.0 Characteristics 2 of 2

Web-based solutions (ASP, SaaS, software, applications.... we no longer say "web pages")

* 3 actors:
o the webmaster
o users
o Web3.0 database servers, microformats
* With evolution and diversity, hardware will take on a major role in Web3.0


Web3.0 Graphically

* Arrival of Web3.0. User(s), communities, mobility (hardware and software)
* The web becomes information: databases (XML, RSS,...), microformats, semantics,...
o Respect for standards enables communication between sites, transit of information, Open Source

Break with Web1.0

* The client becomes an actor. (Knol, blogs, CMS, wikis,...)
* Users are actors
o Wikipedia, social networks, Second Life,...
o Google Writer, Mahalo, Wikia...

Break with Web2.0

* The web becomes an engine and an actor. Hardware becomes "transparent".
* The web solution allows the tool itself to be modified.
* We create the web by means of the web.....
o 1Work:
+ individual creation of databases
+ automatic creation and deletion of documents
+ creation of forms
o Artemis:
+ integrated programming language
+ display...
+ filters, queries
+ input (data entry); output: display or printer....


Web3.0 Graphically

* Locate a webpage or solution on this representation.
o Vertical: mobility; hardware and software.
o Horizontal: user, communities, databases

Web2.0 Web3.0 Graphically

* On this representation, what are the possible sites?
* Determine other situations...
* What should be there for Web2.0? Web3.0?


Web 3.0 Applications
Definitions

* The web by the web: independence at the software and hardware levels.
The web in every place and on every device.

Case study

* www.1computer.info/1work/
* www.1computer.info/artemis/


Conclusion

* In 3 points:
o Web1.0: OS-layer independent
o Web2.0 becomes software
o Web3.0 is software (engine and development) + microformats
+ hardware-layer independent
* Finance: the arrival of Web3.0 will finally free the web from B2C
advertising as its only business model...
* The use of the web in administration, trading and industry
will take a much more important position, maybe even a dominant one.
* Technical solution: « liquid structure »
o In Web3.0 hardware plays one more role: it becomes transparent. The code becomes open source.

How Google Works

Now, this question could easily be answered in 5000 words; books have been written on this subject. But the point is, who wants to know it in depth? All that matters is some interesting facts to impress your friends.

There's a beautiful Flash animation that describes Google's working process.

The Flash gives a summary of the whole book in a 2-minute audio-visual presentation. Really interesting to see.

If you aren’t interested in learning how Google creates the index and the database of documents that it accesses when processing a query, skip this description.


Google runs on a distributed network of thousands of low-cost computers and can therefore carry out fast parallel processing. Parallel processing is a method of computation in which many calculations can be performed simultaneously, significantly speeding up data processing. Google has three distinct parts:

1. Googlebot, a web crawler that finds and fetches web pages.

2. The indexer that sorts every word on every page and stores the resulting index of words in a huge database.

3. The query processor, which compares your search query to the index and recommends the documents that it considers most relevant.
Let’s take a closer look at each part.



1. Googlebot, Google’s Web Crawler

Googlebot is Google’s web crawling robot, which finds and retrieves pages on the web and hands them off to the Google indexer. It’s easy to imagine Googlebot as a little spider scurrying across the strands of cyberspace, but in reality Googlebot doesn’t traverse the web at all. It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google’s indexer.
Googlebot consists of many computers requesting and fetching pages much more quickly than you can with your web browser. In fact, Googlebot can request thousands of different pages simultaneously. To avoid overwhelming web servers, or crowding out requests from human users, Googlebot deliberately makes requests of each individual web server more slowly than it’s capable of doing.

Googlebot finds pages in two ways: through an add URL form, www.google.com/addurl.html, and through finding links by crawling the web.


Unfortunately, spammers figured out how to create automated bots that bombarded the add URL form with millions of URLs pointing to commercial propaganda. Google rejects those URLs submitted through its Add URL form that it suspects are trying to deceive users by employing tactics such as including hidden text or links on a page, stuffing a page with irrelevant words, cloaking (aka bait and switch), using sneaky redirects, creating doorways, domains, or sub-domains with substantially similar content, sending automated queries to Google, and linking to bad neighbors. So now the Add URL form also has a test: it displays some squiggly letters designed to fool automated “letter-guessers”; it asks you to enter the letters you see — something like an eye-chart test to stop spambots.

When Googlebot fetches a page, it culls all the links appearing on the page and adds them to a queue for subsequent crawling. Googlebot tends to encounter little spam because most web authors link only to what they believe are high-quality pages. By harvesting links from every page it encounters, Googlebot can quickly build a list of links that can cover broad reaches of the web. This technique, known as deep crawling, also allows Googlebot to probe deep within individual sites. Because of their massive scale, deep crawls can reach almost every page in the web. Because the web is vast, this can take some time, so some pages may be crawled only once a month.

Although its function is simple, Googlebot must be programmed to handle several challenges. First, since Googlebot sends out simultaneous requests for thousands of pages, the queue of “visit soon” URLs must be constantly examined and compared with URLs already in Google’s index. Duplicates in the queue must be eliminated to prevent Googlebot from fetching the same page again. Googlebot must determine how often to revisit a page. On the one hand, it’s a waste of resources to re-index an unchanged page. On the other hand, Google wants to re-index changed pages to deliver up-to-date results.

To keep the index current, Google continuously recrawls popular frequently changing web pages at a rate roughly proportional to how often the pages change. Such crawls keep an index current and are known as fresh crawls. Newspaper pages are downloaded daily, pages with stock quotes are downloaded much more frequently. Of course, fresh crawls return fewer pages than the deep crawl. The combination of the two types of crawls allows Google to both make efficient use of its resources and keep its index reasonably current.
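The queueing and duplicate-elimination logic just described can be sketched in a few lines of Python. The in-memory `web` dict and the URLs in it are purely illustrative stand-ins for real HTTP fetches:

```python
from collections import deque

def crawl(web, seeds):
    """Breadth-first 'deep crawl' over an in-memory web.

    web   -- dict mapping URL -> list of outgoing links (stands in for
             fetched pages; the real Googlebot does HTTP requests)
    seeds -- starting URLs (e.g. from the Add URL form)
    """
    queue = deque(seeds)     # the "visit soon" queue
    visited = set()          # URLs already fetched and indexed
    order = []
    while queue:
        url = queue.popleft()
        if url in visited:   # duplicate elimination, as described above
            continue
        visited.add(url)
        order.append(url)
        # harvest every link on the page for subsequent crawling
        for link in web.get(url, []):
            if link not in visited:
                queue.append(link)
    return order

web = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["a.com", "c.com"],   # back-links must not be re-fetched
    "c.com": ["d.com"],
}
print(crawl(web, ["a.com"]))  # ['a.com', 'b.com', 'c.com', 'd.com']
```

The `visited` set plays the role of the comparison against URLs already in Google's index: without it, the a.com/b.com back-link cycle would be fetched forever.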

2. Google’s Indexer

Googlebot gives the indexer the full text of the pages it finds. These pages are stored in Google’s index database. This index is sorted alphabetically by search term, with each index entry storing a list of documents in which the term appears and the location within the text where it occurs. This data structure allows rapid access to documents that contain user query terms.
To improve search performance, Google ignores (doesn’t index) common words called stop words (such as the, is, on, or, of, how, why, as well as certain single digits and single letters). Stop words are so common that they do little to narrow a search, and therefore they can safely be discarded. The indexer also ignores some punctuation and multiple spaces, as well as converting all letters to lowercase, to improve Google’s performance.
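A toy version of such an inverted index, with the stop-word handling, punctuation stripping, and lowercasing described above (the stop-word list here is a small illustrative subset), might look like this:

```python
STOP_WORDS = {"the", "is", "on", "or", "of", "how", "why", "a"}

def build_index(docs):
    """Build a toy inverted index: term -> {doc_id: [positions]}."""
    index = {}
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            word = word.strip(".,!?")           # ignore some punctuation
            if not word or word in STOP_WORDS:  # discard stop words
                continue
            index.setdefault(word, {}).setdefault(doc_id, []).append(pos)
    return index

docs = {1: "The web is vast.", 2: "Google crawls the web"}
index = build_index(docs)
print(index["web"])   # {1: [1], 2: [3]} -- both documents, with positions
```

Storing the position of each occurrence is what later allows proximity and phrase matching at query time.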

3. Google’s Query Processor

The query processor has several parts, including the user interface (search box), the “engine” that evaluates queries and matches them to relevant documents, and the results formatter.
PageRank is Google’s system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a page with a lower PageRank.
Google considers over a hundred factors in computing a PageRank and determining which documents are most relevant to a query, including the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page.
Google also applies machine-learning techniques to improve its performance automatically by learning relationships and associations within the stored data. For example, the spelling-correcting system uses such techniques to figure out likely alternative spellings. Google closely guards the formulas it uses to calculate relevance; they’re tweaked to improve quality and performance, and to outwit the latest devious techniques used by spammers.
Indexing the full text of the web allows Google to go beyond simply matching single search terms. Google gives more priority to pages that have search terms near each other and in the same order as the query. Google can also match multi-word phrases and sentences. Since Google indexes HTML code in addition to the text on the page, users can restrict searches on the basis of where query words appear, e.g., in the title, in the URL, in the body, and in links to the page, options offered by Google’s Advanced Search Form and Using Search Operators (Advanced Operators).
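The PageRank idea itself can be illustrated with a tiny power-iteration sketch. This is not Google's actual formula (which, as noted above, is closely guarded and uses over a hundred factors), just the classic textbook recurrence on a three-page toy graph:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank. links: page -> list of outgoing links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with equal rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                      # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                             # share rank among out-links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(links)
print(max(r, key=r.get))  # "c" -- it receives links from both a and b
```

Pages linked to by many (or by highly-ranked) pages accumulate rank, which is the intuition behind "a page with a higher PageRank is deemed more important".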

Let’s see how Google processes a query.


Isn't it interesting? By the way, the name 'Google' was an accident: a spelling mistake by the original founders, who thought they were going for 'Googol'...

IPv6 The Next Generation Internet Protocol

INTRODUCTION

The Internet is the global network consisting of interconnections between millions of computers. The connections between these computers are governed by a set of rules called the 'Internet Protocol', or 'IP' for short. IP, a member of the TCP/IP protocol suite, is the protocol that describes how data is sent across networks. This protocol was initially designed to serve limited, specific needs. However, due to the exponential growth of the Internet, the current version of IP has gradually become a bottleneck for the future of the Internet. As a result, transition to a new, flexible and powerful protocol is unavoidable. This new protocol is called IP version 6 (IPv6).

This report is about the next generation Internet Protocol, IPv6. Its purpose is to inform the reader about the current situation of the Internet Protocol, the necessity of transition to IPv6, the features of IPv6, and transition strategies. Since IPv6 is a deep engineering subject that involves many technical details, only key issues will be presented. This report will be useful for readers interested in computer networking.

The four parts of this report discuss (1) the definition of the Internet Protocol, (2) the necessity of IPv6, (3) the features of IPv6, and (4) the transition to IPv6. The first section describes the Internet Protocol and gives a brief history of IP. The section on the necessity of IPv6 discusses the bottlenecks of the current protocol and why IPv6 is needed. The following section explains some of the features of IPv6. The final section gives information about the transition period, its timing, current transition studies, and the strategies to follow in order to avoid trouble.
INTERNET PROTOCOL OVERVIEW

Before explaining the structure of IPv6, its features, and the necessity of transition, it is useful to review what IP is and its history.
What is IP?

Internet Protocol (IP) is one of the most important cornerstones of the Internet's structure, providing connections between any peers connected to the Internet. Technically, as Kozierok (2004) states, "The Internet Protocol is the primary OSI network layer (layer three) protocol that provides addressing, datagram routing and other functions in an internetwork".
History of IP

The history of IP started more than 20 years ago with the development of a research network at the United States Defense Advanced Research Projects Agency (DARPA, or ARPA). This network, named ARPAnet, may be considered the grandfather of the Internet, and it originally operated on a set of protocols called the Network Control Protocol (NCP). Later, the Transmission Control Protocol (TCP) was used for this network. According to the writers of HistoryoftheInternet.com (1999), "Transfer Control Protocol (TCP), outlined in a 1974 paper by Kahn and Cerf, was introduced in 1977 for cross-network connections, and it slowly began to replace NCP within the original ARPAnet".
The Internet Protocol was first defined in Request For Comments (RFC) 791, in 1981. The name "IP version 4" would imply that there were earlier versions of IP, but in fact there weren't. Until version 4 of TCP, the functions of IP were performed by TCP and there was no distinct protocol named IP; with version 4, TCP was split into two parts, TCP and IP, and for consistency the same version number was applied to IP too. This means that IPv4 is effectively the first version of IP, the one defined in RFC 791.

NECESSITY OF IPv6
Limitations of Current Protocol

As mentioned before, IPv4, the version currently in use, is the only version that was ever deployed, and it has not changed since RFC 791 was published in 1981. However, it was designed with only a small experimental network in mind; today's growth of the Internet was not considered. After two decades the Internet became a widely used, popular communication tool, and this popularity pushed IP to its structural limits.

The most important property of IP is its number allocation system, which assigns a number to every host (Karadere, n.d.). In theory, with its 32-bit addressing structure, IPv4 provides 4,294,967,296 IP numbers. However, as Yeğin (2005) states, due to inefficient allocation mechanisms the number of active addresses can never reach this level. In order to use this limited address space more efficiently, technologies such as Classless Inter-Domain Routing (CIDR), the Point-to-Point Protocol (PPP) and the Dynamic Host Configuration Protocol (DHCP) have been developed. However, according to Kozierok (2004), they only helped to postpone the exhaustion of the address space. Eventually, Network Address Translation (NAT) was added to the structure of the Internet as a patch for the address limit.

NAT is a system that allows privately addressed hosts to connect to the Internet over the same public IP address.

A Turn to IPv6

The current address space is not capable of satisfying the exponential growth of the Internet. Although NAT and other technologies have extended the lifetime of IPv4, these techniques cannot be a complete solution for the future of the Internet. Furthermore, some problems, such as the limited address space, are structural and cannot be fixed. This means a new, flexible version of IP is the only solution worth considering. Therefore, in the 1990s the Internet Engineering Task Force (IETF) started working on a powerful new protocol called IP Next Generation (IPng), later named IP version 6 (IPv6).

Given the version number '6', one might ask: what happened to IP version 5? Version number 5 was given to the Internet Stream Protocol (ST), which "was created for the experimental transmission of voice, video, and distributed simulation. Two decades later, this protocol was revised to become ST2 and started to get implemented into commercial projects by groups like IBM, NeXT, Apple, and Sun" (Krikorian, 2003).
Evolution of IP
According to Kozierok (2004), the primary motivating factor in creating IPv6 was the necessity of a larger address space. Furthermore, beyond fixing the problems of IPv4, the decision to define a new protocol made it sensible to take the opportunity to make as many improvements as possible. These important enhancements are listed in Table 1, which compares IPv6 with IPv4.

IPv4: Source and destination addresses are 32 bits (4 bytes) in length.
IPv6: Source and destination addresses are 128 bits (16 bytes) in length.

IPv4: IPSec support is optional.
IPv6: IPSec support is required.

IPv4: The header does not identify packet flow for QoS handling by routers.
IPv6: The header contains a Flow Label field, which identifies the packet flow for QoS handling by routers.

IPv4: Both routers and the sending host fragment packets.
IPv6: Only the sending host fragments packets; routers do not.

IPv4: The header includes a checksum.
IPv6: The header does not include a checksum.

IPv4: The header includes options.
IPv6: All optional data is moved to extension headers.

IPv4: Address Resolution Protocol (ARP) uses broadcast ARP Request frames to resolve an IP address to a link-layer address.
IPv6: Multicast Neighbor Solicitation messages resolve IP addresses to link-layer addresses.

IPv4: Internet Group Management Protocol (IGMP) manages membership in local subnet groups.
IPv6: Multicast Listener Discovery (MLD) messages manage membership in local subnet groups.

IPv4: ICMP Router Discovery is used to determine the IPv4 address of the best default gateway, and it is optional.
IPv6: ICMPv6 Router Solicitation and Router Advertisement messages are used to determine the IP address of the best default gateway, and they are required.

IPv4: Broadcast addresses are used to send traffic to all nodes on a subnet.
IPv6: A link-local scope all-nodes multicast address is used instead.

IPv4: Must be configured either manually or through DHCP.
IPv6: Does not require manual configuration or DHCP.

IPv4: Uses host address (A) resource records in the Domain Name System (DNS) to map host names to IPv4 addresses.
IPv6: Uses host address (AAAA) resource records in DNS to map host names to IPv6 addresses.

IPv4: Uses pointer (PTR) resource records in the IN-ADDR.ARPA DNS domain to map IPv4 addresses to host names.
IPv6: Uses pointer (PTR) resource records in the IP6.ARPA DNS domain to map IPv6 addresses to host names.

IPv4: Must support a 576-byte packet size (possibly fragmented).
IPv6: Must support a 1280-byte packet size (without fragmentation).

Table 1: Differences between IPv4 and IPv6.


FEATURES OF IPv6
Although this report mostly focuses on IPv6 addressing, the IPv6 header, and address auto-configuration, IPv6 also provides some additional benefits beyond these features. These benefits are summarized by Enterasys Networks, Inc. (2004) as follows:

* Simplified header format for efficient packet handling: the streamlined IPv6 header provides more efficient processing at intermediate routers.
* Hierarchical network architecture for routing efficiency.
* Auto-configuration and plug-and-play support.
* Elimination of the need for Network Address Translation (NAT) and Application Layer Gateways (ALG).
* Embedded security with mandatory IPSec implementation. End-to-end security can be accomplished by deploying IPSec.
* Enhanced support for Mobile IP and mobile computing devices.
* Better support for Quality of Service (QoS). QoS is natively supported in IPv6.

IPv6 Addressing

Larger Address Space
The major factor in designing a new protocol was the limited address space; accordingly, the main feature of IPv6 is its large address space. One might have expected the new protocol to increase the address size from 32 to 48 or 64 bits. However, the design of IPv6 extends the size to 128 bits, which theoretically yields 340,282,366,920,938,463,463,374,607,431,768,211,456 (≈3.4 x 10^38) addresses. Although the main point of 128-bit addressing is to make sure the space will not be exhausted again, "the relatively large size of the IPv6 address is designed to be divided into hierarchical routing domains that reflect the topology of the modern-day Internet" (Davies, 2003, p. 46).
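The quoted figures are easy to verify, since both address spaces are plain powers of two:

```python
# Verifying the address-space sizes quoted above
ipv4_total = 2 ** 32    # 32-bit IPv4 addresses
ipv6_total = 2 ** 128   # 128-bit IPv6 addresses
print(ipv4_total)       # 4294967296
print(ipv6_total)       # 340282366920938463463374607431768211456, ~3.4 x 10^38
```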

A New Representation
Due to its large size, 'colon hexadecimal notation' is used for IPv6 instead of IPv4's 'dotted decimal notation'. To keep the written size down, it is allowed to omit leading zeros and to compress contiguous zero-valued blocks. The figure below shows the representation of an IPv6 address in different notations and illustrates how zero compression is applied. Note that the "::" notation can be used only once in an address, and it expresses that all of the blocks it replaces are zero.
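Python's standard ipaddress module implements exactly these notation rules, so leading-zero omission and zero-block compression can be demonstrated directly (the address below uses the 2001:db8:: documentation prefix, chosen purely as an example):

```python
import ipaddress

# The same address in full and compressed colon-hexadecimal notation
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1  -- zeros dropped, "::" used once
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
```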

IPv6 Address Types
As with IPv4, version 6 also supports three address types, though with some remarkable changes: 1) Unicast, 2) Multicast and 3) Anycast addresses. Unlike in IPv4, as Kozierok (2003) states, "there is no distinct concept of a broadcast address in IPv6"; the functionality of broadcast addressing is performed in IPv6 by multicast addresses. The concept of anycast addresses, on the other hand, is special to IPv6 and will be discussed in a later section.
1- Unicast Addresses
Global Unicast Address:
Global unicast addresses are identified by having their first three bits set to "001" and correspond to a full 1/8 fraction of the complete IPv6 address space. As Davies (2003) describes, IPv6 global unicast addresses are the equivalent of public IPv4 addresses: globally routable and reachable on the whole Internet.
The large size of the IPv6 address provides remarkable flexibility for creating various hierarchical addressing schemes. However, in unicast addresses the last portion is always fixed at 64 bits, used as the interface identifier. In IPv6, Modified Extended Unique Identifier (EUI)-64 addresses are used to build the interface ID of all global unicast addresses. The benefit of this representation, as stated by Kozierok (2003), is that it makes networks easier to administer, because only one number needs to be recorded for each host.
Figure 3 illustrates the process of deriving a 64-bit IPv6 Modified EUI-64 interface identifier from a standard MAC address.
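Since the figure itself is not reproduced here, the derivation it shows can be sketched in a few lines: insert FF FE between the two halves of the 48-bit MAC address, then flip the universal/local (U/L) bit of the first byte. The MAC address used below is an arbitrary example:

```python
def eui64_interface_id(mac):
    """Derive a Modified EUI-64 interface ID from a 48-bit MAC address."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    # Insert FF:FE between the two halves and flip the U/L bit (XOR 0x02)
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    # Group the 8 bytes into four colon-separated 16-bit blocks
    return ":".join(f"{eui[i]:02x}{eui[i+1]:02x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:0c:29:c2:52:ff"))  # 020c:29ff:fec2:52ff
```

The resulting 64 bits form the interface-identifier half of a unicast address; the network prefix supplies the other 64 bits.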

Site-Local Address:

Site-local addresses are the IPv6 implementation of IPv4 private addresses. These addresses are intended for use within a site or an intranet and are not forwarded to the public Internet. In hexadecimal notation they start with "FEC", "FED", "FEE" or "FEF".

Link-Local Address:

Link-local addresses start with "FE8", "FE9", "FEA" or "FEB". IETF author Hinden (1995) explains that link-local addresses "are designed to be used for addressing on a single link for purposes such as auto-address configuration" or neighbor discovery.

Special IPv6 Address Types

■ Loopback Address

The loopback address (0:0:0:0:0:0:0:1, or ::1) is the equivalent of the IPv4 loopback address 127.0.0.1. This special address is used for testing a device by sending packets to itself.

■ Unspecified Address

According to Juniper Networks’ Routing Protocols Configuration Guide the unspecified address (0:0:0:0:0:0:0:0 or :: ) “indicates the absence of an IPv6 address. For example, newly initialized IPv6 nodes may use the unspecified address as the source address in their packets until they receive an IPv6 address.”
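Both special addresses can be inspected with Python's standard ipaddress module:

```python
import ipaddress

loopback = ipaddress.IPv6Address("::1")
unspecified = ipaddress.IPv6Address("::")
print(loopback.is_loopback)        # True -- the IPv6 analogue of 127.0.0.1
print(loopback.exploded)           # the full eight-block form of ::1
print(unspecified.is_unspecified)  # True -- "the absence of an IPv6 address"
```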

Compatibility Addresses

Compatibility addresses are designed to provide a soft transition to the new protocol. Some of these addresses are defined as follows:

■ IPv4-compatible IPv6 addresses: These addresses are assigned to 'dual stack' devices that can work with both IPv4 and IPv6. They consist of 96 zero bits followed by an IPv4 address (0:0:0:0:0:0:212.156.4.4, or simply ::212.156.4.4).

■ IPv4-mapped IPv6 addresses: These are formed as 0:0:0:0:0:FFFF:a.b.c.d, or with zero compression ::FFFF:a.b.c.d, and are used to represent nodes that are only capable of IPv4.

■ 6over4 addresses: The format of this type of address is [64-bit prefix]:0:0:AABB:CCDD, where AABB:CCDD is the hexadecimal notation of the IPv4 address a.b.c.d. 6over4 addresses are used for a tunneling mechanism.
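The IPv4-compatible and IPv4-mapped forms above can be checked with the standard ipaddress module; the 212.156.4.4 example address is the one used in the text:

```python
import ipaddress

# IPv4-mapped form ::FFFF:a.b.c.d -- the embedded IPv4 address is recoverable
mapped = ipaddress.IPv6Address("::ffff:212.156.4.4")
print(mapped.ipv4_mapped)   # 212.156.4.4

# IPv4-compatible form ::a.b.c.d -- 96 zero bits, then the IPv4 address
compat = ipaddress.IPv6Address("::212.156.4.4")
print(compat.exploded)      # last 32 bits are 0xD49C0404, i.e. 212.156.4.4
```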
2- Multicast Addresses

As in IPv4, multicasting in IPv6 is used to send packets to multiple recipients. However, "IPv6 nodes can listen to multiple multicast addresses at the same time. Nodes can join or leave a multicast group at any time" (Davies, 2003, p. 58).

The general structure of an IPv6 multicast address is shown in Figure 6. The scope of a multicast packet is determined by the 4-bit scope field illustrated in Figure 6. In addition, the flag value (000T) with T=0 indicates that the multicast address is permanently assigned; if T=1, it is non-permanently assigned.
3- Anycast Addresses

As Weber and Cheng (2004) describe, anycast addressing is "a new one-to-one-of-many communication method" (p. 127). A packet sent to an anycast address is routed to the interface that is easiest to reach, in routing terms. In practice, this property provides flexibility for load sharing between routers and for finding the best server to use.

Weber and Cheng (2004) draw attention to the fact that the possibilities of anycast addressing have only just been touched on, and research on anycast addressing will continue in the future.
IPv6 Header
Another important feature of IPv6 is its new header structure. Unlike the variable-size IPv4 header, the main header of IPv6 is fixed at 40 bytes. This is achieved by removing unnecessary fields and placing additional (optional) information into extension headers. As Davies (2003) notes, in contrast to the minimum 20-byte IPv4 header, the new IPv6 header is only 40 bytes long, even though it contains source and destination addresses that are four times longer than IPv4's (p. 93).

Figure 7 shows the IPv4 header and the IPv6 main header format, demonstrating that the IPv6 main header has a simpler format than the IPv4 header. According to one expert from Enterasys Networks Inc., this simplified header format provides more efficient packet handling.

Another significant point is that the number of processing tasks performed by routers has been reduced from 6 to 4. For example, IPv6 routers do not perform fragmentation. As a result, the streamlined IPv6 header is processed more efficiently at intermediate routers (Davies, 2003, p. 7).
Address Auto-Configuration

Address auto-configuration protocols such as the Dynamic Host Configuration Protocol (DHCP) ease network management, because network administrators do not have to manually assign an address to each host. For instance, a DHCP server in a network maintains an addressing table, and based on this table every host in the network is assigned an IP address. Because of this table of state, this type of configuration is called 'stateful address configuration'. Like IPv4, IPv6 supports stateful address assignment with a new version of DHCP. Furthermore, IPv6 supports 'stateless' auto-configuration, which enables 'plug-and-play' Internet connection. According to Kozierok (2004), the idea behind this feature "is to have a device generate a temporary address until it can determine the characteristics of the network it is on, and then create a permanent address it can use based on that information".
TRANSITION
Transition Period

Transition to IPv6 is expected to take a long time, because implementing the new protocol requires considerable preparation in various sectors. Furthermore, due to the growth and importance of Internet connectivity, it is impossible to make the migration a 'plug-and-play' process for the entire Internet. The transition therefore needs to be handled with special care and attention.

According to one expert from Cisco Systems (2003), IPv6 networks have existed since 1996, and by the end of 2001 Internet Service Providers (ISPs) had started deploying the new protocol in order to provide IPv6 services to their customers. However, consumer adoption of IPv6 services is expected to continue up to 2010, and it may take decades for IPv4 networks to disappear completely.
Deployment Strategies

In order to achieve a smooth and healthy integration of IPv6 into existing networks, the IETF has proposed a variety of transition mechanisms. These mechanisms come in three general forms: 1) dual-stacking, 2) tunneling and 3) translation. The key goal of these mechanisms is to ensure the coexistence of both protocols and the interoperability of IPv6 networks with existing IPv4-based infrastructures (Enterasys Networks, 2004).
Dual-Stack Mechanism

Dual-stack devices are those that maintain both the IPv4 and IPv6 protocol stacks. According to Carmés (2002), dual-stacking "enables networks to support both IPv4 and IPv6 services and applications during the transition period in which IPv6 services emerge and IPv6 applications become available." He also states that an IPv4 address must be assigned to every dual-stack machine. Since IPv6 was developed precisely because of the scarcity of IPv4 addresses, this extra requirement for IPv4 addresses can be a drawback.
Tunneling Mechanism

In general, tunneling mechanisms allow the interconnection of separate IPv6 networks over IPv4-based services. Later, as the number of IPv6 networks increases, tunneling IPv4 over IPv6 will be needed as well. One expert from Cisco Systems indicates that the following tunnel mechanisms will be used during the transition period:

o IPv6 Manually Configured Tunnel
o IPv6 over IPv4 GRE Tunnel
o Automatic IPv4-Compatible Tunnel
o Automatic 6to4 Tunnel
o ISATAP Tunnel
o Teredo Tunnel

Figure 8 illustrates an IPv6 tunneling demo topology prepared by IP Infusion Inc. and Foundry Networks with their own products. The figure illustrates how IPv6 hosts communicate with each other over IPv4 clouds.


Protocol Translation Mechanism

Unlike the dual-stacking and tunneling cases, if there is no common protocol between peers, i.e. one device is IPv4-only and the other is IPv6-only, protocol translators are used to provide the connection between them. However, it is advisable not to use protocol translators unless strictly necessary, because some technologies, such as IPSec, cannot work through Network Address Translation-Protocol Translation (NAT-PT).

According to Waddington and Chang (2002), the following protocol translation mechanisms are under consideration:

o Network Address Translation-Protocol Translation (NAT-PT)
o Bump-in-the-Stack (BIS)
o Multicast Translator Proxying
o Transport Relay Translator (TRT)
o Bump-in-the-API (BIA)
o SOCKS-Based Gateway

Current Situation in the World

Although address exhaustion is a global problem, the deployment of IPv6 networks is evolving at different rates in different geographies. One expert from Enterasys Networks (2004) states that "the lack of address space in Asia is a key driver, and such countries, like China, Korea and Japan will migrate to IPv6 more quickly than countries in Europe and North America", while the lack of address space is a smaller issue in the United States. For instance, China has started its transition to IPv6 with the development of CERNET2 (China Education and Research Network), which is now being called the biggest network running IPv6. Meanwhile in the USA, the Department of Defense (DoD) plans to complete its transition to IPv6 by 2008.

CONCLUSION
In this report I have tried to explain the main concepts of IPv6, its features and its deployment strategies. I have shown that, beyond being a solution to the limited address space of IPv4, IPv6 provides additional benefits, most of them related to 128-bit hierarchical addressing and its astronomically large address space. Furthermore, I have described how the migration to IPv6 will be done, emphasizing that since IPv6 is an evolution of IP rather than a revolution, the transition will continue over a period of time. Any company planning to implement IPv6 in its network should consider that even though IPv6 has mostly taken shape, some features are still changing. In addition, such companies should benefit from the experience of other organizations that have already completed their transition to IPv6.

Sunday, April 05, 2009

How-To: Locate the IP Address of Any Website...

With this you can trace the IP address and the IP-based location (including big satellite images if needed) of any website or web server host. Special thanks go to Google Earth for making this satellite-image tracing possible.

This is how it can be done:

1) Go to Start --> Run...

2) Type cmd and press Enter to get the command prompt, and

3) Type tracert followed by the website name, e.g.: tracert www.google.com

This starts tracing the route to the website. When the tracing is done, look for the line that says google.com, which is normally the last line printed. The IP address is next to it.

Note down the IP and go to http://www.ip-adress.com/ipaddresstolocation/

Then type the IP into the field provided and click the "Lookup this IP or website" button.

That's it!! Your trace is successful.
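The first half of this lookup (hostname to IP address) can also be done programmatically. Here is a minimal sketch using Python's standard library; localhost is used in the printed example so it works offline, but any website name works the same way:

```python
import socket

def lookup(host):
    """Resolve a hostname to its IPv4 address -- the same number
    that tracert prints next to the site name."""
    return socket.gethostbyname(host)

print(lookup("localhost"))        # 127.0.0.1
# lookup("www.google.com")        # needs a network connection
```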

Note: If you want to see a big satellite view of the IP's location, click on
"See a big IP address satellite image".


Update:
-----------
Ip-adress.com has updated its IP Whois feature, which became very popular.

Now, if you ever wanted to know who the owner of an IP address, domain or host is, just type it in here:

IP Whois - Now even better.

Edit Audio & Create Your Own Ringtone with AUDACITY


Setting your own ringtone is the way to make your mobile phone as unique as you are. Instead of using the default ringtones provided with your mobile phone, or large audio files composed by others, you now have the chance to rearrange any existing ringtone or even create your own.

Ringtone editing software like Cool Edit Pro, Sound Forge etc. gives users the ability not only to edit existing ringtones and add their own touches, but also to create their own ringtones with their rich feature sets.

'Audacity' also falls into the same category of audio editing and recording software, and it is powerful and easy to use. It is open source software that supports almost all operating systems, including Windows, Mac OS X, GNU/Linux etc.

Some of the features that Audacity provides include:
* Record live audio.
* Convert tapes and records into digital recordings or CDs.
* Edit Ogg Vorbis, MP3, WAV or AIFF sound files.
* Cut, copy, splice or mix sounds together.
* Change the speed or pitch of a recording.

Click here to Download Audacity 1.2.6 (.exe file, size ~2.1MB) for Windows 98/ME/2000/XP/Vista

Here is the step-by-step process of how to edit any audio file, including your favorite ringtones, and change them to your liking:

This comes in handy especially when you want only a part of the audio file as your ringtone, or when your phone's memory is too low to accommodate the full-sized ringtone/audio file.

1) Start Audacity, and from the menu bar go to File --> Open..., locate the audio file (supported formats are Ogg Vorbis, MP3, WAV or AIFF) which you want to edit, and click Open.

This will import that audio file into Audacity.

Then, just by clicking and dragging on the audio track (in blue), you can easily set left and right boundaries for your audio clip and thus select only the audio you want to play.
You can click Play in the control toolbar to listen to your selected audio before confirming to trim the audio track and save it.


Once you have successfully selected your preferred audio range, go to Edit on the menu bar and select Trim. You can also use the keyboard shortcut Ctrl+T. This will cut your audio file according to your selection, and your audio track is now left with only the trimmed audio.
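For comparison, here is roughly what trimming does under the hood, sketched with Python's standard wave module. This handles uncompressed WAV files only (Audacity itself, of course, supports many more formats); the file names and times are illustrative:

```python
import wave

def trim_wav(src, dst, start_s, end_s):
    """Copy only the [start_s, end_s) slice of a WAV file to dst --
    conceptually what Trim does to the selected region."""
    with wave.open(src, "rb") as w:
        params = w.getparams()
        rate = w.getframerate()
        w.setpos(int(start_s * rate))                    # jump to selection start
        frames = w.readframes(int((end_s - start_s) * rate))
    with wave.open(dst, "wb") as out:
        out.setparams(params)    # frame count is patched on close
        out.writeframes(frames)

# Example: keep only seconds 2.0-5.0 of a ringtone
# trim_wav("song.wav", "ringtone.wav", 2.0, 5.0)
```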

At any time you can save your project in AUP format, which is the default Audacity project format.

Now you are all set to export your edited tracks or saved projects into a suitable audio format. Out of the box, Audacity supports exporting your tracks to WAV and Ogg Vorbis.
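If you prefer scripting, the same trim step can also be reproduced outside Audacity. The sketch below uses only Python's standard-library wave module, so it works on WAV files only (not MP3 or Ogg); the file names and time range are placeholders of my own, not part of the Audacity workflow above.

```python
import wave

def trim_wav(src, dst, start_sec, end_sec):
    """Copy only the audio between start_sec and end_sec into a new WAV file."""
    with wave.open(src, "rb") as w:
        params = w.getparams()
        rate = w.getframerate()
        w.setpos(int(start_sec * rate))                    # jump to the left boundary
        frames = w.readframes(int((end_sec - start_sec) * rate))
    with wave.open(dst, "wb") as out:
        out.setparams(params)                              # frame count is fixed up on close
        out.writeframes(frames)

# e.g. keep seconds 10-40 of a song as a ringtone:
# trim_wav("song.wav", "ringtone.wav", 10, 40)
```

You can then load the trimmed WAV into Audacity (or any encoder) if your phone needs MP3.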

But what about MP3, which is the most popular format for storing music?




Because the MP3 encoding algorithm is patented and cannot legally be used in free programs, Audacity does not encode MP3 files directly, but it can use external MP3 encoders that you download separately.

All you have to do is obtain the correct MP3 encoder for your computer and then show Audacity where it is located.

As a Windows user, you have to download the LAME MP3 encoder (click to download) and install it. This places lame_enc.dll in the folder where you installed it; you then have to locate this file for Audacity.

Audacity will automatically prompt you to locate your MP3 encoder the first time you try to export to MP3. You only need to locate the above dll file once.


Now you can easily export files to MP3, save them under your desired name on your PC, and use them as your ringtones.

To boost the quality of your ringtones or trimmed audio files, or to add extra effects to them, you have a bundle of features available under the Effects menu.

Hope this helps...

Try it yourself and post your comments; I am waiting for your feedback.

Fast Shutdown in Windows Vista or XP

For Vista Users

Here's a neat one I stumbled across somewhere on the web:

Open Regedit (press WINDOWS KEY + R on your keyboard, or click Start and type "regedit" without quotes in the search bar).

Hit Enter.

Navigate to [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]. In the right panel, look for the "WaitToKillServiceTimeout" string and change its value to 1000 by right-clicking the entry and selecting Modify...

Default value: WaitToKillServiceTimeout=20000
Customized value: WaitToKillServiceTimeout=5000
Customized value: WaitToKillServiceTimeout=1000 (extreme, use it at your own risk)
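If you'd rather not click through Regedit, the same tweak can be applied by double-clicking a .reg file. This is only a sketch using the standard registry-export format; as with any registry edit, back up first, and adjust the value (5000 here) to taste.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
"WaitToKillServiceTimeout"="5000"
```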

I found 3000 worked nicely, but try 5000 first. 3000 does occasionally leave some programs lagging, which then need an extra click to close.

Another way is to right-click on the Desktop, create a new shortcut, and set its Target to:

shutdown.exe -s -t 00

For XP Users

Open Task Manager by pressing Ctrl + Alt + Del.
Go to the Shut Down menu and click while holding Ctrl.

Saturday, April 04, 2009

Future look: Now surf with Super Fast internet.


The internet might soon be made obsolete. The scientists who pioneered it have now built a lightning-fast alternative capable of downloading complete feature films within seconds.

At speeds around 10,000 times faster than a typical broadband connection, the grid will be capable of sending the entire Rolling Stones back catalogue from Britain to Japan in less than two seconds.

The latest spin-off from CERN, the particle-physics centre that created the web, the grid may also provide the kind of computing power needed to transmit holographic images; allow instant online gaming with hundreds of thousands of players; and offer high-definition video telephony for the price of a local call.

David Britton, professor of physics at Glasgow University and a leading figure in the grid project, believes grid technologies could revolutionise society. "With this kind of computing power, future generations will have the ability to collaborate and communicate in ways older people like me cannot even imagine," he said.


The power of the grid will become apparent this summer after what scientists at CERN have termed their big day: the switching-on of the Large Hadron Collider (LHC), the new particle accelerator built to probe the origin of the universe. The grid will be activated at the same time to capture the data it generates.

CERN, based near Geneva, started the grid computing project seven years ago, when researchers realised the LHC would generate annual data equivalent to 56 million CDs, enough to make a stack 40 miles high.
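The 40-mile figure roughly checks out. The thickness of one disc below is my own assumption (about 1.2 mm), not a number from the article:

```python
# Back-of-the-envelope check of the "stack 40 miles high" claim.
cds = 56_000_000
disc_thickness_mm = 1.2            # assumed thickness of one CD
stack_m = cds * disc_thickness_mm / 1000
stack_miles = stack_m / 1609.344
print(round(stack_miles))          # about 42, in the ballpark of the quoted 40 miles
```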

This meant that scientists at CERN, where Sir Tim Berners-Lee invented the web in 1989, would no longer be able to use his creation for fear of causing a total collapse.

This is because the internet has evolved by linking together a hotchpotch of cables and routing equipment, much of which was originally designed for telephone calls and therefore lacks the capacity for high-speed data transmission.

By contrast, the grid has been built with dedicated fibre-optic cables and state-of-the-art routing centres, meaning there are no outdated components to slow the flow of data. The 55,000 servers already installed are expected to grow to 200,000 within the next two years.

Professor Tony Doyle, technical director of the grid project, said: "We need so much processing power that there would be an issue about getting enough electricity to run the computers if they were all at CERN. The only answer was a new network powerful enough to send the data instantly to research centres in other countries."

That network, in effect a parallel internet, is now built, using fibre-optic cables that run from CERN to 11 centres in the United States, Canada, the Far East, Europe and elsewhere in the world.

One terminates at the Rutherford Appleton Laboratory at Harwell in Oxfordshire.

From each centre, further connections fan out to a host of other research institutions using existing high-speed academic networks.

It means Britain alone has 8,000 servers on the grid system, so that any student or academic will theoretically be able to hook up to the grid, rather than the internet, from this autumn.

Ian Bird, project leader for CERN's high-speed computing project, said grid technology could make the internet so fast that people would stop using desktop computers to store information and entrust it all to the internet.

"It will lead to what's known as cloud computing, where people keep all their information online and access it from anywhere," he said.

Computers on the grid can also transmit data at lightning speed. This will allow researchers facing heavy processing tasks to call on the assistance of thousands of other computers around the world. The aim is to eliminate the dreaded slowdown experienced by internet users who ask their machines to handle too much information.

The real goal of the grid is, however, to work with the LHC in tracking down nature's most elusive particle, the Higgs boson. Predicted in theory but never yet found, the Higgs is believed to be what gives matter mass.

The LHC has been designed to hunt out this particle, but even at optimum performance it will generate only a few thousand of the particles a year. Analysing the mountain of data will be such a great task that it will keep even the grid's huge capacity busy for years to come.

Even if the grid itself is unlikely to be directly available to domestic internet users, many telecoms providers and businesses are already introducing its pioneering technologies. One of the most potent is so-called dynamic switching, which creates a dedicated channel for internet users trying to download large volumes of data such as films. In theory this would give a standard desktop PC the ability to download a movie in five seconds rather than the current three hours or so.

Additionally, the grid is being made available to dozens of other academic researchers, including astronomers and molecular biologists.

It has already been used to help design new drugs against malaria, the mosquito-borne disease that kills a million people worldwide every year. Researchers used the grid to analyse 140 million compounds, a task that would have taken a standard internet-linked PC 420 years.

Projects like the grid will bring enormous changes in business and society, as well as science.

Holographic video conferencing is not that far away. Online gaming could evolve to include many thousands of players, and social networking could become the main way we communicate.

The history of the internet shows you cannot predict its real impacts, but we know they will be huge.

Windows 7 is on the way for 2009 release

Microsoft is moving forward and planning to launch Windows 7 this year, although the company still refuses to publicly commit to that goal.

PC industry sources in Asia and the U.S. tell CNET News that they have heard things are on track for a launch by this year's holiday shopping season, which has been Microsoft's internal goal for some time.

Microsoft is also putting the finishing touches on a program to offer Vista buyers a free or low-cost upgrade to Windows 7. That program could kick off as early as July, sources said.

The company has run such "technology guarantee" programs in the past, typically allowing each PC maker to set the exact rules, but for the most part offering buyers after a certain date a free upgrade to the next version. (TechARP has a story with even more details on Microsoft's planned Windows 7 Upgrade Program.)

In an interview at the Consumer Electronics Show in January, Microsoft senior VP Bill Veghte cautioned that the release could still be pushed into 2010, depending on customer feedback.

"I'm telling them that it could go either way," Veghte said in that January interview. "We will ship it when the quality is right, and earlier is always better, but not at the cost of ecosystem support and not at the cost of quality."

That remains the company's official position, although the wheels are in motion toward a release in time for Windows 7 machines to be sold this holiday season, PC industry sources tell CNET News.

The response to test versions of Windows 7 has been in stark contrast to the issues that dogged Windows Vista, which was a much more fundamental update to the operating system. Though Windows 7 adds things like an improved taskbar and snappier performance, the operating system shares most of the same underpinnings as Windows Vista. (Click on the video at right to hear me talk Windows 7 on CNET Editors' Office Hours.)

Microsoft has reiterated that it plans just a single beta for Windows 7. That beta launched in January, and Microsoft this week stopped offering downloads of the test version. The company has said it will have a near-final "release candidate" version, but has not said when that will come.

Earlier this month, Microsoft confirmed that it plans to sell at least six different versions of Windows 7, though it also said it will focus its efforts on two editions: Windows 7 Home Premium and Windows 7 Professional. (By way of comparison, Microsoft announced the various versions of Vista in February 2006 before finally making the code available to retail customers in November 2006.)

I Love Linux coz ....

http://farm3.static.flickr.com/2031/2428594983_fe30642b19_o.jpg



A. Linux has better hardware support than Windows. As a matter of fact, I'll go out on a limb and say Linux has better hardware support than any other system out there. Does it support everything? No, but neither does Windows.

B. Linux has excellent installers. The problem is that people don't want to wait for the application to become available to them in an easily installable format. Case in point: a friend of mine installed Linux and OpenOffice.org, but the latest OpenOffice.org had just been released and he wanted to run it right away.

Windows users are a little spoiled in that it is the norm to automatically include an installer for most Windows applications (mostly because Windows doesn't even come with the ability to compile for most users).

As long as you don't NEED (who really needs this anyway) to use the latest and greatest bleeding edge version, then you're just fine.

Personally I would like to see a unified installer for Linux, and all Linux apps be made available for install on all distros the day it's released. Some day that will happen, at least for Desktop distros, but I think we're still some way off from that.

FYI, at work, our Linux machines don't even install any applications. They install a base system, and all needed applications are linked in from a central location. So they are all installed and upgraded centrally, and run locally. Try THAT with Windows.

Friday, April 03, 2009

How to Earn Money Online?

Online currency is required for various transactions, such as purchasing a premium account for Rapidshare or any other paid online file-sharing service. This online money can be credited to your PayPal or AlertPay account. There are a number of ways to earn cash online without investing even a single penny. Here I have listed a few of the websites which offer such services.



Blogging:
Blogger is the service offered by Google in which you can create your blog for free and put your thoughts on the Internet. You can sign up for Google AdSense, AdBrite, and other ad networks to earn money through your website. The more traffic your blog gets, the more you will earn. It also helps in growing your social network. This is not an instant way to earn money, but once the earnings get started you can make a very handsome amount of money in the long run.



Earn by Clicking Ads Online:
There are many websites which pay you for clicking ads; one of these is Bux.to, which pays for referrals as well. This site just needs 30 seconds of your time to view each ad, and the amount will be credited instantly.


Surveys:
Many websites offer online surveys, including AwSurveys.com. On this website you can get a fairly handsome amount of money just by writing 3 to 4 sentences about some of the listed websites. This website also pays for referrals: you will get $1.25 for each referral. This is a much easier way of earning money than running a blog. The site also pays $6.00 for a new sign-up in the form of a welcome survey, in which you just have to write a few lines about a given website.


Earn money by receiving SMS: Earning money for free using your cell phone! Now it's easy to earn money using your mobile phone. You will receive ads about various services of interest to you, and you will also get money for viewing those SMS messages. Such a service is offered by mGinger. Just sign up there and enjoy the free money.


Earn money by receiving e-mails:
Like the advertising services above, this is one more advertising service in which you will get money to view ads via e-mail. These services are offered by many companies, for instance YouMint and RupeeMail.

Top Portable Applications you should have

Here is a list of the top 10 portable applications that are free. A portable application is one that can be used from a portable device like a pen drive.
  1. Mozilla Firefox: Certainly Mozilla Firefox is the best web browser, so download it and use it from your portable device.
  2. VLC Player: This player can play almost all video formats. Its actual size is 16 MB, but you can get it here in a much smaller package.
  3. WinRar: Wherever you go, you can extract archives if you have this utility.
  4. Foxit Reader: An alternative to Adobe Reader so that you can view all your PDF files.
  5. Winamp: A music addict will know the usefulness of this utility.
  6. OpenOffice: To do all your MS Office work, you should have this.
  7. FS Capture: To edit photos or take screenshots; an awesome utility with a file size of only 1 MB.
  8. uTorrent: This one is for all torrent freaks.
  9. Miranda: This is an IM client. Carry it if you want to chat with all of your friends.
  10. NVU: To work with HTML, you ought to have this.
I think I have covered the best software from each field and provided a handy, portable solution for each. Readers' comments are invited so that further improvements can be made.

Increase Your Google PR in Just 10 Steps

10 Steps To Increase Your Google PR

There is no surprise behind the success you can have by raising your Google PR, but the constant question is how you can do so. There is an array of different ways to increase your Google PR, but persistence, and using a combination of several methods, will help you climb the search engines and raise your PR faster. So, here's how to increase your Google PR.

1. Link exchange:
If you link to other sites that are similar to yours, your Google PR will grow from the multiple links you have pointing back at your site. If you link to sites unrelated to yours, Google will penalize you.

2. Content:
The kind of content you produce goes a long way. If you can offer cool and tempting content to your readers, more than likely you will get return visits to see what else you have to offer, which Google loves.

3. Forums:
Posting in forums is a good way to spread your expertise across the Web and advertise your products and business at the same time. A lot of forums allow signatures where you can link your site and refer people to it, which in turn will create extra traffic and improve your Google PR.

4. Blogs:
Having a blog attached to your website is practically essential today. The way to grow your Google PR from blogs is by consistently posting new ideas and thoughts each day, offering advice and answering questions, and, of course, promoting your business.

5. Article writing:
Article writing is one of the finest methods out there to increase your Google PR. You have the ability to share your knowledge, use keyword optimization, submit the piece to thousands of article directories, and grow your website links through the resource box.

6. Writing reviews:
As wild as it may sound, your Google PR can and will go up if you write reviews on anything. Whether it is on movies, books or sports games, the key is subtly inserting the link to your website in them.

7. Affiliate programs:
Affiliate programs can increase your Google PR, but they can hurt it as well. It is vital that you sign up for a profitable affiliate program that correlates with your website; otherwise you will see your Google PR drop.

8. Free offers:
All you have to do is put up the word "free" and people are instantly interested. By sending out a free e-book through an auto-responder, or free articles for a week, you will generate a high volume of traffic coming just to get the free item.

9. Newsletter:
A newsletter is a great way to raise your Google PR, as it keeps visitors involved and constantly keeps them up to date with what is going on in your business.

10. Keyword optimization:
Keyword optimization is probably the most difficult way to increase your Google PR, but it can potentially boost it the most. By targeting specific keywords throughout your entire website and in all of your content and articles, you will slowly rise to the top of the engines.

The Art of Conversation - How to talk to People ...

Many observers of modern society complain bitterly that the
art of conversation has been irretrievably lost in the
United States.

Yet on closer examination, we discover that the art of
conversation is quite alive and well in America. Only its
rules of engagement have changed from what they were a
century or two ago. The "art of conversation" has always
managed to adapt itself to the times and mores of society.

In eighteenth century England, Samuel Johnson quipped
dryly: "Questioning is not the mode of conversation among
gentlemen." It was considered quite rude to confront
someone with a question in "polite conversation."

Today, questions politely phrased indicate a high degree of
interest in the speaker and are used to propel the
conversation forward.

Women during Victorian times were expected to engage in
conversations that addressed only a few light subjects. The
weather was a favorite. It rarely raised heated debate,
which was to be shunned at all costs.

Today, women appear to be as free as men to indulge
debating any topic of interest. Consider the thousands of
chat rooms, forums and blogs on the Internet with exchanges
on virtually any topic you can imagine!

Conversation is the foremost means of self-expression of
all people. It provides a means of transmitting knowledge
from one generation to the next. Conversation creates
self-confidence, and enables us to build trust among
people. Let's define exactly what we mean by the phrase,
"art of conversation."

An "art", according to Merriam Webster's Dictionary, is "a
skill acquired by experience, study or observation." A
"conversation" is "an oral interchange of sentiments,
observations, opinions or ideas." So, "the art of
conversation" could be said to be a "skillful exchange of
opinions."

Just how do we go about becoming masters of the art of
conversation?

1. Try to be comfortable, both physically and
psychologically, as you enter into a conversation. If
either of you is uncomfortable, the conversation is likely
to be stilted and artificial.

To become a master at the art of conversation, try to make
the other person as comfortable as you yourself would like
to be.

2. Try to find out something interesting about your
partner. Whether the conversation is one struck up between
two perfect strangers on a train, or with your life-long
best friend, trying to get to know that person better is a
key strategy to be used in good conversation.

Asking how someone feels is a great first step in providing
the basis of that comfort and security.

3. Be credible! A master of the art of conversation will
always support his or her opinions with a goodly amount of
information that can be easily verified. Credibility builds
trust, and trust leads to the highest level of
communication.

4. Try not to interrupt the other person. This one is key!
It's just plain rude and often results in argument, the
least desirable form of communication.

5. Use questions, instead of making statements. Questions
involve a response that will carry the conversation forward
naturally. Flat statements are often considered threatening.

These easy steps are the key cornerstones for learning to
become a master of the modern art of conversation. This is
true whether you might be chatting on the Internet, dining
in a fine restaurant, or simply enjoying the company of
good friends.