Sunday, July 17, 2011


EXAM DATE IS 25-9-11

                                          1. SET EXAM TIPS AND PATTERN
                                          2. SET SYLLABUS
                                          3. TIPS TO PASS UGC CSIR EXAMINATION (NET-JRF)

Saturday, July 16, 2011


Chandrayaan-2 will be India's second mission to the Moon, to be launched by the Indian Space Research Organisation (ISRO) in 2013. It is the second unmanned lunar exploration mission proposed by ISRO, at a projected cost of Rs. 425 crore (US$ 90 million). The mission includes a lunar orbiter as well as a lander/rover. ISRO plans to land a motorised rover on the Moon as part of the mission. The wheeled rover will move on the lunar surface and pick up soil or rock samples for on-site chemical analysis. The data will be sent to Earth through the Chandrayaan-2 orbiter, which will remain in lunar orbit.
History of Chandrayaan-2
The Indian Government approved the mission on 18 September 2008. On 12 November 2007, representatives of the Russian Federal Space Agency (Roskosmos) and ISRO signed an agreement for the two agencies to work together on joint lunar research and exploration under the Chandrayaan-2 project. ISRO will have prime responsibility for the orbiter, and Roskosmos will be responsible for the lander/rover.
Specification of Chandrayaan-2
Chandrayaan-2 will consist of the spacecraft itself and a landing platform with the Moon rover. The rover, of Russian design, will weigh 58 kg, run on six wheels powered by solar energy, and land near one of the poles, where it is expected to survive for a year, roving up to 150 km at a speed of 360 m/h. The platform with the rover will detach from the orbiter after the spacecraft reaches its orbit above the Moon and descend to the lunar soil, after which the rover will roll off the platform. The mission will be launched on India's Geosynchronous Satellite Launch Vehicle (GSLV), most likely by 2013, around the time the life of the existing Chandrayaan-1 comes to an end.
Chandrayaan-2 Design
The initial phase of the Chandrayaan-2 design was completed on 20 August 2009. The mission will carry a lunar rover that will be dropped onto the lunar surface once the craft reaches lunar orbit. The rover's main duty will be to collect and analyse lunar soil and transmit the data back to data centres on Earth. Notably, while the spacecraft will be indigenously made, the lunar rover will be built by Russia. Since the design phase is complete, a prototype of Chandrayaan-2 should appear soon, and work on the spacecraft is expected to continue at a steady pace until the day of the launch.


Thursday, July 14, 2011


In Sanskrit, the language of ancient India, "Chandrayaan" means "Moon Craft" or "Moon Vehicle". Chandrayaan-1 was India's first mission to the Moon, launched by India's national space agency, the Indian Space Research Organisation (ISRO). The lunar craft, launched on a Polar Satellite Launch Vehicle (PSLV), weighed 1,304 kg at launch and 590 kg in lunar orbit, and orbited the Moon at 100 km above its surface. The unmanned lunar exploration mission included a lunar orbiter and an impactor. India launched the spacecraft on a modified version of the PSLV, PSLV-C11, on 22 October 2008 from the Satish Dhawan Space Centre, Sriharikota, Nellore District, Andhra Pradesh, about 80 km north of Chennai. The mission was a major boost to India's space programme, as India competes with its Asian neighbours China and Japan in exploring the Moon. The vehicle was successfully inserted into lunar orbit on 8 November 2008.

Successful Launch of Chandrayaan-1
On 14 November 2008, the Moon Impact Probe (MIP) separated from the Moon-orbiting Chandrayaan-1 at 20:06 and impacted the lunar south pole in a controlled manner, making India the fourth country to place its flag on the Moon. The MIP struck near the crater Shackleton, at the lunar south pole, at 20:31 on 14 November 2008, releasing subsurface debris that could be analysed for the presence of water ice.
Cost of the Chandrayaan-1 Project
The estimated cost for the Chandrayaan 1 project is Rs. 386 crore (US$ 80 million).
Features of Chandrayaan 1
The remote sensing lunar satellite weighed 1,380 kilograms (3,042 lb) at launch and 675 kilograms (1,488 lb) in lunar orbit, and carried high-resolution remote sensing equipment for visible, near-infrared, and soft and hard X-ray frequencies. Over a two-year period it was intended to survey the lunar surface to produce a complete map of its chemical characteristics and three-dimensional topography. The polar regions are of special interest, as they might contain ice. The mission carried five ISRO payloads and six payloads from other international space agencies, including NASA, ESA, and the Bulgarian Aerospace Agency, which were carried free of cost.


Wednesday, July 13, 2011


The planet Mercury is the closest to the Sun and is now the smallest planet in our Solar System. Temperatures on Mercury range from about 700 kelvin on the sunlit side to 90 kelvin on the night side. Mercury orbits the Sun once every 88 days and rotates on its axis once every 58 days. Its orbit is very elliptical, bringing it as close to the Sun as 46 million kilometres and taking it as far away as 70 million kilometres. Since it is so close to the Sun, Mercury can only be seen from Earth during early morning or evening twilight.
We can observe Mercury from the Earth using both optical and radio telescopes, but much of what we know about Mercury is the result of three fly-bys performed by the Mariner 10 spacecraft during the 1970s. Mariner 10 photographed only 40 to 45 percent of the surface of Mercury, and the rest has never been seen up close. The photographs Mariner 10 did send back revealed a rocky, cratered surface similar to the Earth's own Moon.
Mercury is about 4,878 kilometres in diameter, which makes it slightly smaller than the moons Ganymede and Titan. However, Mercury is more than twice as massive as either of them, thanks to its relatively high density, which is second only to the Earth's. This high density is the result of Mercury's inner structure, which has a relatively large iron core that may be wholly or partly molten. The iron core also generates a weak magnetic field, about 1 percent as strong as the Earth's. Despite being weak, this magnetic field allows Mercury to maintain a very thin atmosphere within what is called the magnetosphere; the field does this by deflecting the solar wind.
Up until 1962 it was believed that Mercury rotated on its axis once each time it orbited the Sun. This would mean that one side of Mercury always faced the Sun, the same way one side of the Earth's Moon always faces the Earth. Doppler radar observations conducted in 1965 showed this is not so: Mercury actually rotates on its axis three times during the course of two of its orbits around the Sun. This has some rather odd effects, especially when combined with Mercury's highly elliptical orbit. If you were standing on Mercury you would see the Sun rise and then grow larger in size. The Sun would then stop in its journey across the sky and reverse its course. After backtracking a ways, the Sun would stop again and resume its original course, then appear to shrink in size and drop below the horizon.
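The 3:2 spin-orbit resonance described above is easy to check with simple arithmetic. A minimal sketch, using slightly more precise period values of about 87.97 and 58.65 days (the figures in the text are rounded to 88 and 58):

```python
# Mercury's 3:2 spin-orbit resonance: three rotations take about
# the same time as two orbits around the Sun.
ORBITAL_PERIOD_DAYS = 87.97   # time for one orbit of the Sun
ROTATION_PERIOD_DAYS = 58.65  # time for one rotation on its axis

two_orbits = 2 * ORBITAL_PERIOD_DAYS        # ~175.9 days
three_rotations = 3 * ROTATION_PERIOD_DAYS  # ~175.9 days

# The ratio of orbital period to rotation period is almost exactly 3:2.
ratio = ORBITAL_PERIOD_DAYS / ROTATION_PERIOD_DAYS
print(f"2 orbits: {two_orbits:.1f} d, 3 rotations: {three_rotations:.1f} d")
print(f"orbit/rotation ratio: {ratio:.3f}")
```

With the precise periods, two orbits and three rotations agree to within a few hours, which is why a point on Mercury's surface faces the Sun only once every two Mercury years.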
Mercury's orbit gets even stranger as a result of being so close to the Sun, where the Sun's gravitational field is incredibly strong. As Mercury orbits the Sun, the point where it starts a new orbit moves slightly. This is called the precession of the perihelion, and it cannot be fully explained using Newtonian physics. For a while it was hypothesised that another planet, even given the name Vulcan, was exerting its gravitational pull on Mercury and causing the precession. This was proved false, and the existence of Vulcan was dismissed, when Albert Einstein's General Theory of Relativity provided a better explanation.
At present, Mercury is the least studied of the planets, but that will soon change. On August 3, 2004, NASA launched a new mission to Mercury named MESSENGER, which stands for MErcury Surface, Space ENvironment, GEochemistry, and Ranging. The MESSENGER spacecraft made three fly-bys of Mercury in January 2008, October 2008, and September 2009, and then settled into orbit around Mercury in March 2011. Japan and the European Space Agency are also planning a joint mission to Mercury called BepiColombo, which is expected to arrive at Mercury in 2019. These spacecraft will use a variety of scientific instruments to tell us more about all aspects of the planet Mercury.

Monday, July 11, 2011


Venus is the second planet from the Sun and the sixth largest. It is the brightest object in the night sky except for the Moon. Venus orbits the Sun once every 224.7 Earth days, getting as close to the Sun as 107.476 million kilometres and as far away as 108.942 million kilometres. This makes the orbit of Venus less elliptical and more circular than that of any other planet. The temperature on the surface of Venus can reach as high as 740 kelvin. This is due to the greenhouse effect, whereby carbon dioxide in the atmosphere of Venus traps the Sun's heat, making Venus the hottest planet in the Solar System, hotter even than Mercury despite being farther from the Sun.
Venus is 12,100 kilometres in diameter and has a mass of 4.869e+24 kilograms. This makes Venus similar to the Earth, and it has often been called Earth's sister planet. But the similarities end there. One major difference is that Venus rotates on its axis from east to west, which means that if you lived on Venus you would see the Sun rise in the west and set in the east. The atmosphere on Venus is mostly carbon dioxide laced with sulfuric acid, and the pressure at the surface is more than 92 times the pressure at sea level on Earth. Unlike the Earth, Venus does not have a magnetic field generated by its iron core; this may be the result of how slowly Venus rotates on its axis. The only magnetic field Venus has is very weak and is produced by the interaction of the solar wind with the ionosphere of Venus.
The surface of Venus is difficult to see through its thick, dense clouds, and the first crude images of the surface were obtained using ground-based radar. More detailed images were obtained by the Magellan spacecraft, which was launched to Venus on May 4, 1989 and spent four and a half years radar-mapping 98 percent of the surface. Later, the European Space Agency launched Venus Express on November 9, 2005, and on April 11, 2006 it slipped into a polar orbit around Venus. These probes have now provided us with an accurate map of Venus.
Most of the surface of Venus consists of relatively flat plains created by giant pools of lava. Venus has thousands of small volcanoes and hundreds of large ones, many over 100 kilometres in diameter. Fairly large craters are scattered at random all over the surface; these craters are all more than 2 kilometres wide, and smaller craters do not exist because smaller meteors burn up in Venus's thick atmosphere. The map of Venus is dominated by two large highland areas: Ishtar Terra, where Maxwell Montes, the highest mountain on Venus, can be found, and the Aphrodite Terra highlands.
More missions to Venus are planned for the future. NASA's MESSENGER spacecraft completed two fly-bys of Venus, in October 2006 and June 2007, while on its way to Mercury. The BepiColombo spacecraft, to be launched by the European Space Agency, will also perform two fly-bys of Venus on its way to Mercury. Japan is planning to launch the Planet-C Venus climate orbiter in 2010, and NASA has proposed a spacecraft called VISE, the Venus In-Situ Explorer, which would actually land on Venus, take a core sample, and examine it. These missions will tell us more about the chemical composition and climate of Earth's sister planet.


KARIAVATTOM - 695 581, THIRUVANANTHAPURAM - Ph. 0471 2308167
The Department of Optoelectronics, University of Kerala, was established in 1995 and offers
M.Tech. and M.Phil. courses. The department also has very strong doctoral research
programmes in the areas of laser technology, fibre optics and fibre optic sensors, photonic
nanomaterials, laser spectroscopy, holography and speckle interferometry, and laser remote
sensing. It has well-equipped M.Tech./research laboratories and a very good
reference library.
M.Tech course offered:
Electronics and Communication (Optoelectronics and Optical Communication) - Approved by
Two years (Four semesters under credit and semester system).
Total number of seats: 18; sponsored category: 5; SC/ST category: 3; General: 10
{among the general seats, reservation is also given to OBC / Physically Handicapped / BPL
(forward community) candidates}.
Eligibility: at least a second class B.E. / B.Tech. or equivalent degree with 55% marks in Electronics /
Electrical and Electronics / Electronics and Communication Engineering / Applied Electronics and
Instrumentation, or an M.Sc. degree in Physics / Applied Physics / Electronic Science or Electronics
of the University of Kerala or equivalent. The minimum mark in the qualifying examination for SC/ST
candidates is 50%.
Admission for the non-sponsored category of students will be made on the basis of GATE
score. When GATE-qualified candidates are not available, admission will be given to non-GATE
candidates on merit; such admissions will be based on an entrance test conducted
by the Department and the marks secured in the qualifying examination.
Candidates for sponsored seats must have a minimum of two years' experience in a relevant
field and must be sponsored by industry / training / research organisations. The maximum age for
a sponsored candidate is fixed at 45 years as on the 1st day of the year of admission. A
sponsored candidate has to remit a sponsorship fee of Rs. 15,000/- per year. Out of the 5 seats
reserved under the sponsored quota, 2 seats are reserved for candidates sponsored by the Ministry of
Defence, Government of India. Along with the application, sponsored candidates should produce
an experience certificate and a letter from the employer stating that the candidate is being
sponsored for admission to the M.Tech. programme. The employer should also certify that the
candidate will not be withdrawn midway before completion of the course.
The open seats available (after applying OBC / PH / BPL reservation) will be equally
allotted to candidates belonging to the M.Sc. and B.Tech. streams. If sufficient candidates
are not available from one category, those seats will be allotted to candidates from the
general list.
Fee Structure for M.Tech course: Electronics and Communication (Optoelectronics
and Optical Communication)
1. Tuition fee : Rs. 8,500/- (per semester)
2. Laboratory fee : Rs. 1,000/- (per semester)
3. Laboratory and material fee : Rs. 2,500/- (per year)
4. Library fee : Rs. 500/- (per semester)
5. Stationery fee : Rs. 1,000/- (per semester)
6. Internet charges : Rs. 500/- (one-time fee)
7. Learning materials : Rs. 250/- (one-time fee)
8. Caution deposit: Library : Rs. 2,000/- (refundable)
9. Caution deposit: Laboratory : Rs. 3,000/- (refundable)
10. Affiliation fee : Rs. 400/- (one-time fee)
11. Admission fee : Rs. 200/- (one-time fee)
12. Sponsorship fee : Rs. 30,000/- (can be paid in 2 equal installments of Rs. 15,000/- per year)
13. Other special fees : Rs. 670/- (per year)

Application forms can be obtained from "The Professor & Head, Department of
Optoelectronics, Kariavattom, Thiruvananthapuram - 695 581" on handing over the challan
receipt from the University cash counters at the Kerala University Office Complex, Palayam,
Thiruvananthapuram, or at the Kariavattom Campus, after remitting the cost of the application form and
a registration fee of Rs. 500/-. Candidates who wish to obtain the application form by post must
apply with a stamped, self-addressed envelope and a Demand Draft of Rs. 510/- in favour of the
Finance Officer, University of Kerala, Thiruvananthapuram, drawn on SBI, SBT, or District
Co-operative banks, payable at Thiruvananthapuram. The application can also be downloaded from the
University website: www.keralauniversity.ac.in. The filled-in application, along with attested true
copies of the following certificates:
2. Plus two
3. B. Tech./M. Tech. degree certificates and mark lists of all semesters
4. Conduct certificate
5. Transfer Certificate (TC)
6. Eligibility (for degrees other than that of University of Kerala)
7. Caste certificate (for availing reservation)
8. Income certificate (for availing reservation)
should reach The Professor & Head, Department of Optoelectronics, University of Kerala,
Kariavattom, Thiruvananthapuram - 695 581.
An application form issued to a candidate is not transferable to another candidate.
Last date for receipt of applications: 01-08-2011
Entrance test for non-GATE applicants: 08-08-2011

Sunday, July 10, 2011


Mass spectrometers are instruments that identify the types of molecules in a sample by creating ions from the specimen molecules. These ions are accelerated by an electric field and then passed through a magnetic field, which sorts them according to their mass-to-charge ratio (m/z). The act of creating ions often breaks the molecules into charged fragments that are characteristic of the original substance. The mass of each fragment is displayed in the form of a spectral plot, and the compound's mass spectrum can then be used for qualitative identification.
The fragment masses can be used to piece together the structure and the mass of the original molecule. The analysis, therefore, proceeds from the molecular mass and the masses of the fragments: these are compared against reference data to establish the identity of the sample. This is possible because each substance's mass spectrum is unique, provided the parent mass is correctly matched to the output, and vice versa.
Process Description
A basic mass spectrometer contains a sample inlet, an ionisation source, an ion accelerator, a mass focuser, and a detector. More sophisticated instruments also employ some form of energy filter before the mass focuser in order to achieve more accurate mass assignments. There are many variations of mass spectrometer, but for lack of space here a look at the conventional, basic instrument will be sufficient.
Samples admitted to a mass spectrometer must exist in the vapour phase, so to ensure that the sample to be analysed remains a gas, the sample inlet is kept above ambient temperature, sometimes as high as 400 °C.
The following steps illustrate the process:
1. The sample enters the ionisation chamber, is heated, and turns to gas.
2. A beam of electrons is accelerated by a high voltage.
3. The high-energy electrons ionise the sample molecules and shatter them, producing well-defined fragments.
4. Each fragment then travels to the accelerator as an individual particle.
5. Under the influence of the accelerating voltage, the charged particles gain speed in the acceleration chamber.
6. The ions enter the magnetic field, which only allows those of a particular mass-to-charge ratio to pass through. The magnetic field is varied so that fragments of different masses reach the detector in turn. The ions collide with the detector, the original signal is amplified, and the result is passed to a computer for processing and analysis.
The output is produced as an array of peaks on a chart, called the mass spectrum. Each peak corresponds to a fragment mass; the more fragments detected with a particular mass, the more intense that peak will be.
Output Analysis

Under controlled conditions, each substance has a characteristic mass spectrum, so it is possible to identify a specimen by comparing its mass spectrum with those of known compounds. Quantitative analysis becomes possible by measuring the relative intensities of the mass spectra.
A peak in the mass spectrum representing the unfragmented molecule is called the parent ion (or molecular ion); it is the highest detected mass and represents the molecular weight of the sample under analysis. However, it is the various other peaks in the spectrum that reveal the molecule's structure. Sometimes the hardest part of mass spectrometric analysis is finding the parent peak, and thus the molecular mass of the sample.
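As a rough sketch of how the parent peak and base peak are read off a spectrum, the following toy example stores a spectrum as (m/z, intensity) pairs. The numbers are invented for illustration only (loosely resembling a small alcohol with molecular weight 46) and are not real reference data:

```python
# Toy mass spectrum as (m/z, relative intensity) pairs.
# Values are illustrative only, not measured reference data.
spectrum = [(15, 8.0), (29, 30.0), (31, 100.0), (45, 51.0), (46, 22.0)]

# Parent (molecular) ion: the highest m/z detected,
# taken here as the molecular weight of the sample.
parent_mz, parent_intensity = max(spectrum, key=lambda peak: peak[0])

# Base peak: the most intense fragment, conventionally scaled to 100.
base_mz, base_intensity = max(spectrum, key=lambda peak: peak[1])

print(f"parent ion m/z = {parent_mz}")  # 46
print(f"base peak m/z = {base_mz}")     # 31
```

In practice the parent ion can be weak or absent, which is why, as noted above, finding the parent peak is sometimes the hardest part of the analysis.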
For modern mass spectrometry, computer hardware and software in both instrument control and spectral analyses play vital roles in obtaining the final results.


String theory has always been a fascinating subject, capturing the attention of many science reporters and science-fiction film producers. This branch of physics aims to discover patterns in the universe that go beyond the atomic and subatomic particles we can already study and understand.
According to the theory, all the elementary particles in the Universe are made of tiny strings under continuous tension. Although our familiar Universe has four dimensions (up-down, front-behind, left-right, and time), the strings move in eleven dimensions (M-theory); the other seven are imperceptible to us.
We can detect atoms, protons, and electrons, but we cannot see the strings or their vibrations. String theory also contains objects other than strings, called membranes. It has been suggested that our Universe sits on a membrane that is infinite in length but very thin, and that the Big Bang resulted from a collision between the membrane containing our Universe and another containing a parallel universe.
String theory is a purely mathematical hypothesis; we cannot even prove that the strings exist. Scientists have tried to predict the energy interactions the strings would produce, and even to address the gravitational problems posed by Einstein's theory, and then to combine all of this into a stable formulation of quantum mechanics.

Quantum mechanics, or quantum physics, is a modern branch of physics that studies the behaviour of atomic and subatomic particles. Among its founders were Max Planck, Erwin Schrödinger, and Werner Heisenberg; much of the foundational research was developed between 1926 and 1935.
However, string theory is not accepted by all physicists. Many consider this "theory of everything" to be completely wrong. Thousands of pages have been written about it and hundreds of debates have taken place, but not a single testable prediction has been confirmed. In their view, string theory is an untested mathematical construction rather than a real theory. Two proponents of this criticism are Lee Smolin, a founder of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and Columbia University mathematician Peter Woit; both have written books arguing that string theory is a dead end.


Although the range of scientific endeavours that involve the use or study of nanotechnology grows every year, it is difficult to find a definition that covers every aspect. In simple terms, the prefix "nano" indicates something with extremely small dimensions, and this gives us an insight into what nanotechnology is really about. A nanoparticle size analyser using Dynamic Light Scattering technology has already been developed that can measure particles from below 1 nm up to 6 μm.
When you consider that the metre is defined by the International Standards Organisation as 'the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second', and that a nanometre is 10⁻⁹ of a metre, you develop an idea of the scale at which this particular science works.
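The scale arithmetic above can be made concrete with a short sketch. The speed of light and the nanometre conversion are exact by definition; the object sizes at the end are rough, commonly quoted approximations, not measured values:

```python
# The metre is defined via the speed of light: c = 299,792,458 m/s exactly.
C_M_PER_S = 299_792_458
NM_PER_M = 1e9  # a nanometre is 1e-9 of a metre

# Distance light travels in one nanosecond, in metres and nanometres.
d_m = C_M_PER_S * 1e-9   # ~0.2998 m
d_nm = d_m * NM_PER_M    # ~3.0e8 nm

print(f"light travels {d_m:.4f} m (= {d_nm:,.0f} nm) per nanosecond")

# Approximate sizes of familiar objects, converted to nanometres.
# These are rough textbook figures, used only for a sense of scale.
sizes_m = {"DNA helix width": 2e-9, "typical virus": 1e-7, "human hair": 8e-5}
for name, metres in sizes_m.items():
    print(f"{name}: {metres * NM_PER_M:,.0f} nm")
```

Seen this way, a 1 nm particle is about as far below a human hair in scale as a coin is below a city.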
Think of electrons and the scale at which they exist and then imagine scientists who work with physical products that can only be measured in these extremely small scales and you have an idea of what nanotechnology really is. It is working with matter at the molecular level and includes the scientific ability to manufacture items from these extremely small building blocks so that they eventually become high performing products.
You get a clearer understanding when you consider the types of products that are being developed.
Nanotechnology enables scientists to build machines which have the scale of molecules, they can be a few nanometres wide, in other words smaller than a cell, yet perform functions that would normally be expected of a computer. The science-fiction notion of submarines the size of a pinhead that can travel through the human body to perform complex medical operations are closer to the truth than might have been expected when this novel idea was dreamt up.
It is generally thought that nanotechnology has the capacity to improve the efficiency of machines and processes in every facet of life. However, there is also concern that making things smaller may introduce dangers and risks that we don't yet understand. As more and more materials are produced at nano sizes, the need to understand nano-toxicology increases. A good example is titanium dioxide, an ingredient in sunscreens. Sunscreen manufacturers use titanium dioxide powders with smaller particle sizes because they are more transparent and more effective at blocking UV radiation from the sun. However, if the particles are too small they can pass through the skin and enter the bloodstream, and scientists don't yet fully understand the negative effects of having these nanoparticles in the human body.
Over the next 20 to 30 years the development of nanotechnology is likely to exert more and more influence in the world of science and the impact will become more apparent in a relatively short space of time.


Some theoretical physicists, most notably Stephen Hawking and several superstring theorists, have frequently foretold that we will soon know nature's ultimate natural laws; that the theoretical recipe book will finally be complete and can be printed in permanent, waterproof ink. This was also Einstein's dream during the final thirty years of his life, and there is indeed an aesthetic, almost magical attraction to the search for the "final formula": one grand unified "theory of everything" (TOE), which ideally should be short enough to fit on a postage stamp, or at least a T-shirt. However, there is much to be said against such a dream, and today the TOE remains well out of reach:
Even if such a description of nature exists, which is far from certain, why should we, just today, be able to obtain complete knowledge of all the secrets of the universe, its origins, and its ultimate fate? We would never imagine that amoebas have a correct theory of everything; but if there are ancient civilisations as far beyond us in development as we are above amoebas, how likely is it that our TOE will match theirs?
If an "ultimate" theory is eventually constructed, will it be possible to test it through experiments and observations? This is, after all, the foundation of all exact science: it must be possible to test a theory and its predictions against nature itself, as the only raison d'être of a theory is its capacity to describe nature. That is, in fact, what is meant by the very concept of physics. But unless a major technological breakthrough is made, we will soon reach the limits of how powerful the particle accelerators we can manufacture can be.
Even though the known fundamental forces of nature - three or four, depending on how you count - are all relatively simple, it is almost always impossible to make detailed predictions of the behavior of even mildly complex systems starting from those laws. Is that a fundamental property of nature itself or simply a consequence of our theories being formulated in a manner which is far from ideal?
The concept of "building blocks" may be meaningless at the fundamental level. A reductionist approach - that things are made of smaller things - is subconsciously ingrained, particularly in the western world. However, it is possible that there may be principles of nature that differ from the hierarchical structure of the natural laws known today.
Physics is the science of nature, but our theories should be short, simple summaries of the complex world. A theory that contains everything would probably have to be... nature itself! In recent years, particularly within chaos theory and quantum physics, researchers have realised that it is very important to distinguish between the model (the natural law) and what it models (the observed phenomenon). A theory of practical use will always miss some things, because otherwise it would lose its entire reason for existing, which is to simplify the description. Therefore a TOE, a definitive theory or formula that applies to the entire universe and everything in it, will probably remain forever an unattainable dream.


The History of NASA
NASA (National Aeronautics and Space Administration) remains a government agency that still inspires an amazing sense of wonder. Yet in recent years many do not see what NASA does as being all that spectacular; perhaps NASA has done its job so well that many take the ability to travel through space for granted. During the early days of NASA, however, the American public was completely mesmerized by the ability of legendary astronauts to travel beyond the big blue marble we call Earth.

NASA evolved from a previous executive-branch agency known as NACA (the National Advisory Committee for Aeronautics), which had existed since 1915. In many ways, the United States was caught sleeping when the Soviets launched Sputnik I into space in October of 1957. This caused major panic in the United States due to fear of space-launched atomic weapons, and a sense of national pride was at stake because the Soviets had beaten the USA into space. This led to the legendary space race and to the transformation of NACA into NASA under the leadership of President Dwight Eisenhower.
On July 29, 1958, Eisenhower signed the National Aeronautics and Space Act, which established NASA. This was a huge undertaking, and the new agency began with an operational workforce of 8,000 personnel. NASA was heavily involved with the development of rockets and satellites for military and non-military use, but it would soon develop loftier goals.

In 1958 NASA launched Project Mercury, which dealt with sending a man into space. While we take such things for granted today, in 1958 there were serious questions about whether or not a man could survive in space. The work of Project Mercury certainly delivered results: in May of 1961 Alan Shepard became the first American to travel in space when he piloted Freedom 7 on a suborbital mission, and in February of 1962 John Glenn entered the history books by orbiting the Earth, building significantly on Shepard's accomplishment.
While the military component of NASA was always pronounced, it is the notion of human space exploration (dubbed the final frontier) that truly captured the public's imagination. In particular, astronauts became true heroes to children, as they were as close to the classic image of the superhero as a human being could possibly come.
The next two major programs of NASA were Gemini and Apollo. Gemini was designed to develop and research the techniques needed for missions to the moon, and its work was quite successful, setting the stage for the Apollo moon missions. In 1969, Neil Armstrong became the first man to walk on the moon, in what was clearly the watershed moment in the history of NASA.
In some ways, the legendary moonwalk was the pinnacle of NASA's success, but its work continued and brought many more dramatic successes over the decades. This is why the legacy of NASA is an exalted one.


Magnetic flux is a measurable quantity of a magnetic field over a given area. Flux lines are often used to depict the direction of flux as it flows from the north to the south pole of a magnet: the greater the number of flux lines, the greater the flux density and, ultimately, the greater the magnetic force.
Generally, magnetic flux will take the easiest route from pole to pole. A ferromagnetic material such as mild steel is an excellent conductor of magnetic flux; conversely, air is a poor conductor, and paramagnetic materials such as aluminium and brass can be regarded much like air.
Air-gaps are used in designs and applications to encourage magnetic flux to take a particular route that would benefit the holding of the workpiece or load.
The use of air-gaps and pole pieces allows the focus of the magnetic flux where it is most effective and at the same time, remove or reduce flux from areas where it may become a nuisance.
The geometry of the pole and the amount of flux that it can carry to the workpiece has a measurable impact on the clamp force that can be generated.
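The clamp force mentioned above is commonly estimated with the idealized magnetostatic relation F = B²A / (2μ₀). The following is a sketch under textbook assumptions (no flux leakage, no air gap, uniform flux density across the pole face), not a model of any particular chuck or lifter:

```python
from math import pi

MU_0 = 4 * pi * 1e-7  # permeability of free space, H/m

def clamp_force(flux_density_t: float, pole_area_m2: float) -> float:
    """Idealized magnetic holding force in newtons across one pole face:
    F = B^2 * A / (2 * mu_0).
    Assumes uniform flux density, no leakage, and no air gap."""
    return flux_density_t**2 * pole_area_m2 / (2 * MU_0)

# Example: 1.0 T across a 10 cm^2 (10e-4 m^2) pole face.
force_n = clamp_force(1.0, 10e-4)
print(f"ideal clamp force: {force_n:.0f} N")
```

Because the force scales with B², the flux-density losses described in the surrounding text (thin workpieces, stray flux, poor pole coverage) reduce holding force disproportionately.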
Stray flux is regarded as "useless" in the sense that its contribution to clamp force is negligible at best and, just as importantly, it can become a potential burden.
Manufacturers of magnetic chucks and magnetic lifters design their products for a range of applications. The actual performance of the product can vary depending on how they are applied.
All users of these products must understand that the workpiece is an integral part of the overall circuit and magnetic flux will behave differently with dissimilar workpieces and how they are positioned over the poles.
Questions like "what happens to the chips?" and "will my cutting tool become magnetic?" are not uncommon when promoting a magnet as an alternative workholding device. These questions should not be underestimated.
As stated earlier, stray (or excessive) magnetic flux will not only impair cut performance but is also evidence of inefficient magnetic force.
The target for any magnetic workholder is where the flux emanating from the poles is totally absorbed within the workpiece.
A workpiece that is too thin will not be able to absorb all the magnetic flux made available by the chuck poles. This results in excess flux at the top of the workpiece, which may attract (ferromagnetic) debris such as chips. In addition, this inefficiency causes the flux density at the pole/workpiece interface to fall, which reduces clamp force.
A workpiece can also be positioned so that it covers the pole areas unevenly. In that case much of the flux is not useful and contributes very little to holding force, and some of the stray flux will use the workpiece, and possibly the cutting tool, to find an alternative route. Simply sliding the workpiece to the correct area improves holding force and minimises stray flux.