Science News

Bengaluru Wednesday, 28 November, 2018 - 18:03

In a recent study, researchers at the Indian Institute of Science, Bengaluru, have described how the pathogenic bacterium Salmonella, which causes a range of diseases from diarrhoea to typhoid, evades our immune system. The findings of this study, funded by the Department of Biotechnology and the Department of Atomic Energy, have been published in the journal PLoS Pathogens.

Salmonella is a wily bacterium with many tricks up its sleeve to evade the action of our immune system, the de facto protector of our body. Once they escape, these bacteria become much harder to treat. Although Salmonella can attack many types of cells in our body, dendritic cells, a type of immune cell, play an essential role in the infection. In this study, the researchers uncover how the bacteria act inside dendritic cells.

The researchers used mouse models in their experiments and demonstrated that, in dendritic cells, Salmonella enhances the expression of a protein called sirtuin 2 (SIRT2). Although previous studies have explored the potential role of sirtuin 2 in other contexts, like oxidative stress, tumour growth and inflammation, its involvement in Salmonella infection was not known until now.

When sirtuin 2 is produced in excess, a chain of events is set off. The increase in sirtuin 2 boosts the production of nitric oxide in the cells by raising the levels of nitric oxide synthase, the enzyme that produces it. Although nitric oxide has antibacterial properties, it weakens the immune system of the host by inhibiting the multiplication of T cells, important members of the immune system.

“Being a suppressor of T cell proliferation as well as an antimicrobial agent, nitric oxide regulation can affect Salmonella infection in both positive and negative ways, respectively”, explain the authors.

However, not everything is bad! The researchers found a potential solution to the problems caused by sirtuin 2. They observed that by artificially inhibiting the action of sirtuin 2, the immune cells in the host could be made fit enough to clear the pathogens. “Our study demonstrates that sirtuin 2 deficiency lowers the systemic bacterial burden as well as prolongs survival of infected mice,” add the authors.

The researchers also believe that their findings might help design new strategies to control Salmonella infections.

“Considering all the available data, administration of sirtuin 2 inhibitor or nitric oxide synthase 2 inhibitor, along with conventional antibiotics, might be helpful in clearing persistent infection”, conclude the researchers.

Section: General, Science, Health, News
Bengaluru Wednesday, 28 November, 2018 - 01:24

“It’s difficult to make predictions, especially about the future”, goes a saying, though who said it first is contested. Nowhere is its essence more true than in the field of population projections. For example, naively extrapolating the past rate of world population growth into the future starts to give absurd results very quickly. Yet, this has not deterred people from trying to project not only population growth but a host of associated social and economic parameters into the future.

Back in the late 18th century, Thomas Malthus, an English scholar of economy and demography, argued that population increases exponentially, while food production can only grow linearly. Eventually, there would be more hungry mouths than food production could match, and the world would tend towards a ‘Malthusian catastrophe’. However, an increasing population also has its benefits. It can lead to growth in the economy due to the change in the age structure of the population. This positive outcome is called the ‘demographic dividend’.
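
To see how stark the mismatch becomes, here is a toy calculation (the growth rates are purely illustrative, not Malthus's own figures): a population compounding at 2% a year against a food supply that grows by a fixed step each year.

```python
# Toy illustration of Malthus's argument (hypothetical rates):
# population compounds geometrically; food grows by a fixed arithmetic step.
population = food = 100.0
for year in range(1, 201):
    population *= 1.02   # 2% compound growth per year
    food += 2.0          # fixed increment per year
    if year % 50 == 0:
        print(f"year {year}: population = {population:6.0f}, food = {food:4.0f}")

# year  50: population ~  269, food = 200
# year 100: population ~  724, food = 300
# year 150: population ~ 1950, food = 400
# year 200: population ~ 5248, food = 500
```

The gap widens without bound, which is the ‘Malthusian catastrophe’ in miniature.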

By 2025, India’s population is expected to reach 1.5 billion. Will this increase produce a ‘demographic dividend’ or lead to a ‘Malthusian catastrophe’? Different methods for predicting India’s future population give radically different results. Now, researchers at Shanghai University, China, and the International Institute for Applied Systems Analysis, Austria, have developed a novel model that, unlike earlier ones, takes many more characteristics of the population into account when making demographic projections.

Conventional models for population projection use the national age and sex distribution for their estimates. But the population of a country does not consist of identical individuals producing the same number of offspring. The authors of the present study, published in the Proceedings of the National Academy of Sciences (PNAS), argue that such models suffer from two difficulties: they do not consider the heterogeneity of the population, and they do not take into account all the factors that influence fertility rates.

In a vast country like India, fertility rates show significant regional differences. For instance, in the 1990s, the fertility rate in Kerala was 1.8 per woman while it was 5.1 in Uttar Pradesh. Applying the same model at the level of each state, rather than nationally, yields a substantially higher national population because the high-fertility states produce more offspring and hence gain higher weightage over time, as the sketch below illustrates. This raises the question of the appropriate scale at which the model should be applied.
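
A minimal sketch of why disaggregation matters (all numbers except the two fertility rates above are hypothetical, and the "model" here is a deliberately crude geometric projection, not the one used in the study):

```python
# Crude geometric projection: each generation scales a population by
# fertility relative to the replacement level (~2.1 children per woman).
REPLACEMENT = 2.1

def project(pop_millions, fertility, generations=3):
    for _ in range(generations):
        pop_millions *= fertility / REPLACEMENT
    return pop_millions

# Hypothetical starting populations (millions); the fertility rates of
# 1.8 and 5.1 are the 1990s Kerala and Uttar Pradesh figures cited above.
kerala, up = 30.0, 160.0
avg_fertility = (1.8 * kerala + 5.1 * up) / (kerala + up)

national = project(kerala + up, avg_fertility)       # one national rate
by_state = project(kerala, 1.8) + project(up, 5.1)   # state by state
print(f"national-rate model: {national:.0f}M, state-level model: {by_state:.0f}M")
# The state-level total is larger: the high-fertility state compounds
# and gains weight over time, which a single averaged rate washes out.
```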

However, even among individuals of the same age and sex, other factors determine how many offspring they are likely to have. One such factor is education, because educated women tend to have fewer children. Explicitly considering education levels, and their projected increase over time, results in lower projected population growth, as the proportion of educated women, with their lower fertility rates, rises with time. Hence, the projection model needs to include factors whose inclusion improves predictions, while not being burdened with superfluous details.

In this study, the researchers propose a multi-dimensional model that projects future trajectories of fertility, mortality and migration based on current data and past trends. Besides, to address the issue of heterogeneity, the researchers consider the urban and rural populations of each of India’s states and union territories separately.

The results of the analyses show India’s population peaking between 1.66 billion and 1.8 billion by mid-century, before declining after 2070. If education enrollment were to remain constant at today’s levels, India would still have a significant population of uneducated people. However, in the last thirty years, the proportion of adult women without school education has fallen from 70% to 46%.

If education enrollment levels continued to increase, one would expect concomitant improvements in human development and economic growth, say the researchers. They also point out that broad-based education has many benefits, ranging from poverty eradication and economic growth to health, well-being, the quality of institutions and even democracy.

The study provides some insights into the implications of population growth for the future of India, and also for developing population projection models elsewhere in the world.

“We have shown how different degrees of accounting for measurable heterogeneity within populations changes the way in which we see the future”, say the authors.

Similar studies on population estimation in the past have had distinctly mixed results. One of the best known is the “Second India” study commissioned by the Ford Foundation in the 1960s to examine the social and economic consequences of a projected doubling of the Indian population from its then 500 million to 1 billion. In retrospect, its projections for food production turned out to be underestimates, whereas its projections on education and poverty were too optimistic.

So, what does the future hold for India? How does India compare with its neighbour China, which also has a comparably large population? The researchers note that China is about three to four decades ahead of India because of its massive investment in universal education in the 1950s. The education pyramid of India looks similar to China’s in 1980, and the projected education pyramid for India in 2050 looks similar to China’s today.

“While cultural and institutional factors may differ between the two countries, and there can be no perfect analogy, this comparison makes it look likely that India will experience similarly rapid human-capital–driven development as China has over the past three to four decades”, concludes the study.

Section: General, Science, Society, Deep-dive
Bengaluru Wednesday, 28 November, 2018 - 01:05

In a recent study, researchers at the National Rice Research Institute, Cuttack, under the Indian Council of Agricultural Research, together with their collaborating institutions, have shed light on how seven rice varieties grown in eastern India emit methane. The findings of the study have been published in the journal Science of the Total Environment.

Methane and other greenhouse gases trap heat in the atmosphere and contribute to global warming. Methane is the second largest contributor to such human-driven climate change. Although rice plays a special role in food security, it also has a negative role as an emitter of methane. Flooded paddy fields favour the growth of bacteria that produce methane from the organic matter in the soil. Crop residues and the organic matter released by rice roots also act as sources of methane. In addition, rice plants release the methane in the soil into the atmosphere through the pores in their tissues.

The researchers selected seven rice varieties for the study based on the length of their life cycle and analysed the contribution of these varieties to methane emissions. They studied the rate of methane emission, the substances exuded by the roots, and the pores in the gas-conducting tissue in the stem, called aerenchyma.

Describing the main findings, the researchers say, “The rate of methane emission depends on the adaptations of the aerenchyma tissue, root exudation and the rate of organic matter production.”

According to the researchers, these factors are tied to the length of the rice plant’s life cycle and its adaptations. “Short-duration rice varieties emit the least methane, and long-duration varieties emit the most. Methane emission relative to grain yield is also lowest in the short- and medium-duration varieties,” the researchers report.

This variation in methane emission among rice varieties shows that selecting suitable varieties for cultivation could help reduce the problem of methane emissions.

“It is possible to develop rice varieties based on the environment, the length of the life cycle and the potential for lower methane emission, and such varieties would be very helpful in curbing greenhouse gas emissions,” the researchers add.

Section: General, Science, News
Vellore Tuesday, 27 November, 2018 - 17:50

How ready are India's to-be doctors to learn on their own and keep themselves updated with the newest knowledge in the medical field? If the results of a study involving students of the Christian Medical College (CMC), Vellore, are anything to go by, the answer is a sobering ‘average’. The study, conducted by researchers at CMC and the University of Saskatchewan, Canada, was published in the journal BMC Medical Education.

Most of us would be familiar with learning as a school or college activity, where a prescribed syllabus is set for study. Once those years pass, learning anything new becomes optional. Doctors, however, need to learn throughout their lives, as the field of medicine transforms with each passing day. Hence, self-directed learning, where one plans, implements and evaluates one's learning independently, becomes paramount. With this perspective, the Medical Council of India (MCI) has modified its syllabus to promote lifelong learning among medical students. But are the students ready for it? The study set out to explore this question.

The researchers conducted a survey among 453 students and some faculty from CMC and analysed their responses. Their analyses revealed that the readiness for self-directed learning among students was 'average'. Those in their final year of the course were the least ready, compared to newly admitted ones. The study found that gender and age had no role in determining the spirit for self-learning. Although most students understood the importance of being self-directed, a few constraints were impacting their ability to become better self-learners.

The medical course curriculum, regular assessments and the cultural background of the students all play a role in the readiness for self-directed learning, say the researchers. Today's medical programmes are exam-oriented with a rigid curriculum, and there is little room for self-learning.

“Students need assistance to improve their self-management skills to take control over his or her own learning especially in respect to time, resources and learning strategies due to the packed curriculum”, the researchers point out.

What approaches can then motivate students to learn better on their own? An interactive lecture or introduction to a new clinical case might help new students to expand their knowledge, suggest the researchers. Senior medical students could be motivated through practical case discussions, observing doctors serving in the clinics, or during their training days in the clinical wards, they say. The researchers also stress the need for modifying the existing teaching and learning activities to provide an environment for the students to polish and promote their self-learning abilities.

“Given the decline in self-directed learning readiness between batches of students from admission year to the final year of studies and its importance in medicine, the current curriculum may require an increase in learning activities that promote self-directed learning", conclude the authors.

Section: General, Science, Health, Society, News
Vellore Tuesday, 27 November, 2018 - 09:56

Typhoid and paratyphoid fevers are life-threatening infections characterised by high fever, diarrhoea and vomiting. They are among the most common diseases caused by water or food contaminated with the bacteria Salmonella typhi and Salmonella paratyphi. In India, about 494 children per 100,000 in the age group of 5-15 years suffer from typhoid. The disease places a significant burden on young children.

In a recent review study, researchers from Christian Medical College (CMC), Vellore, the All India Institute of Medical Sciences (AIIMS), New Delhi, TN Medical College & B Y L Nair (BYLN) Hospital, Mumbai, and the Translational Health Science and Technology Institute (THSTI), Faridabad, throw some light on recent trends in the disease. The study was published in The American Journal of Tropical Medicine and Hygiene and was supported by the Bill & Melinda Gates Foundation.

The researchers analysed the number of diagnosed cases of typhoid at the CMC, AIIMS and BYLN hospitals over a 15-year period (2000-2015). They also reviewed data from the same period on access to potable water, hygiene and sanitation, population density, and economic growth from nationally representative demographic health surveys, along with data on antimicrobial use and antimicrobial resistance. In addition, they examined water supply and sanitation reports by the United Nations Children’s Fund (UNICEF), the World Bank, and the Joint Monitoring Programme.

The study found a decline in the cases of typhoid during the 15-year period.

“There appears to be a decline in the isolation of S. typhi in blood cultures, which is more apparent in the past 5 years, which can be temporally related to economic improvement, female literacy, and the use of antibiotics such as cephalosporins and azithromycin”, say the researchers.

The researchers observed that the three hospitals had similar numbers of S. typhi diagnoses. The percentage of definite S. typhi diagnoses declined from 0.62% in 2000 to 0.18% in 2015 at AIIMS, from 1.38% in 2000 to 0.17% in 2015 at CMC, and from 0.4% in 2010 to 0.2% in 2015 at BYLN. “This declining pattern may be confounded by several factors, including prior antimicrobial therapy, differing health seeking practices, changed diagnosis guidelines, and laboratory methods”, added the researchers.

The researchers then sought to find out if improved sanitation and access to clean water contributed to the reduction in typhoid cases. They found that sanitation coverage improved from 49.3% in 1990 to 62.6% in 2015 in urban areas, and from 5.6% to 28.5% during the same period in rural areas. There was also an increase in access to clean water. In 1990, 47% of the population in urban areas and only 6% in rural areas had access to clean drinking water. By 2015, these figures had risen to 54% and 16% respectively.

The number of people defecating in the open had also reduced, from 653 million in 1990 to an estimated 569 million in 2015. With improved urban planning, the fraction of people living in slums had decreased from 54.9% in 1990 to 24% in 2015. Besides, the female literacy rate had almost doubled, from a mere 33.7% in 1991 to 63.0% in 2015.

“Our review of the trends suggests that although commendable, the progress in access to safe water and sanitation is neither qualitatively nor quantitatively sufficient to explain the decline in typhoid across different settings in India. Therefore, other interventions such as hygiene education and regulating commercial food handling should be implemented to create an environment that provides an effective barrier to S. typhi infection,” remark the authors.

Would antibiotics not work, you ask? Well, in today’s world of antibiotic-resistant bacteria, S. typhi and S. paratyphi have been found to be susceptible to only three antibiotics: ampicillin, co-trimoxazole and chloramphenicol, although resistant strains continue to evolve. The bacteria have already become resistant to commonly used antibiotics like nalidixic acid. “In future, the rampant, early, and indiscriminate use of antibiotics causing the emergence of drug-resistant strains may unmask clinical infection, resulting in a resurgence of disease at different locations in India making it a matter of great concern”, warn the researchers.

Although the study cannot establish a causal relationship between the decline in typhoid cases and the improvements in contextual factors, the researchers speculate that this declining trend may be a result of increasing per capita income, improved access to health care and early antibiotics.

“Increasing antibiotic resistance may result in a resurgence of disease unless effective public health interventions are rapidly and appropriately deployed,” concluded the authors, stressing the need for urgently improving drinking water and sanitation facilities in the country before there is a severe outbreak of enteric fever. 

Section: General, Science, Health, Society, Deep-dive
Mumbai Monday, 26 November, 2018 - 22:33

Researchers at the Indian Institute of Technology Bombay have studied various ways of using computers to detect sarcastic statements.

The Internet could well be called the world’s largest “recommendation system”. Before buying a product online or watching a film, we read the reviews that thousands of people have written about it, don’t we? In online marketplaces, these reviews can affect a seller’s revenue. That is why companies strive to understand how their customers feel. To do this, they use systems that read the reviews and report whether the customer is happy, angry or disappointed. For such systems, the sentiment of ‘sarcasm’ is hard to grasp. A group led by Prof. Pushpak Bhattacharyya at the Indian Institute of Technology Bombay has analysed the various ways in which computers can identify the sentiments expressed in online text.

Dr. Aditya Joshi, an author of the survey paper on this subject published in ACM Computing Surveys, says, “Large organisations (political, commercial, and so on) use social media to learn what people think of them. The same applies to text written in employee performance appraisals, hotel feedback forms, and the like.” Online reviews come in many forms. Some are direct, making it clear whether the customer is happy or not. But some reviews read like this: “Simply amazing! This dye turned my hair exactly the shade of red I wanted!” At first glance, this looks like a positive review, unless, of course, the dye was meant to turn hair black. A prospective buyer who knows this while reading the review will spot the sarcasm! Now suppose a computer tries to analyse this statement; will it notice the sarcasm? According to ongoing research on the subject, it can. That research has brought out various methods of deciding whether a piece of text is sarcastic or not.

“For sentiment analysis systems, sarcasm in a sentence is a perennial headache. That is why research on detecting sarcasm is useful,” says Dr. Joshi about the motivation behind the work. Dr. Joshi, Prof. Pushpak Bhattacharyya and Dr. Mark Carman have written a book titled ‘Investigations in Computational Sarcasm’, which gives a comprehensive account of computational sarcasm and also discusses the challenges and opportunities facing this research.

Ways of detecting sarcasm

Like humans, computers too ‘learn’ to recognise sarcasm by seeing many examples of it. One way to help them learn is to use the vast amount of data available on Twitter and ‘train’ them to treat tweets carrying hashtags such as #sarcasm or #not as sarcastic text. However, Twitter is just one medium of communication. There may well be other websites carrying sarcastic text without such helpful hashtags.

Hence, a basic way of detecting sarcasm is to identify patterns that pair positive words with negative situations, for example, “I just love being woken up at 4 am by a headache!”, and to use such patterns as variables (features) for a statistical classifier. A statistical classifier is a function that is trained by supplying the computer a set of examples, teaching it to recognise sarcastic text. According to one study described in the survey, a logistic regression classifier answers the question of whether a text is sarcastic or not correctly 81% of the time. The classifier checks whether a sentence satisfies certain specific conditions.
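
As an illustration, here is a minimal sketch of such a statistical classifier (not the implementation from any of the surveyed papers): tweets that carried #sarcasm serve as positive examples, and word n-grams are the features a logistic regression model learns from. The four training examples below are invented.

```python
# A toy sarcasm classifier built on hashtag-labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I just love being woken up at 4 am by a headache",  # carried #sarcasm
    "What a great day, my flight just got cancelled",    # carried #sarcasm
    "This phone has excellent battery life",
    "The support team resolved my issue quickly",
]
labels = [1, 1, 0, 0]  # 1 = sarcastic, 0 = sincere

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)
print(model.predict(["I love it when people ignore me"]))
```

A real system would be trained on many thousands of such examples and richer features; this sketch only shows the shape of the pipeline.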

Another way of gathering such examples is to enlist human annotators to decide whether a text is sarcastic or not. This method is especially useful when the text is long and there are no cues like hashtags to rely on. Of course, since annotators come from different cultural backgrounds, this approach brings a difficulty of its own.

“One of our papers discusses how a person’s culture affects the way they perceive sarcasm. We compared American and Indian annotators to see how their notions of sarcasm differ,” says Dr. Joshi. The sentence “It’s sunny outside and I am at the office, great!” seemed sarcastic to the American annotators but not to the Indian ones. The reason: India’s climate.

Sometimes even humans find it hard to tell the exact tone of emails and text messages. If someone exclaims, “I really love your hair!”, we cannot always tell whether they mean it. How, then, is a computer to read the tone of a sentence? Some sentences are plainly biting: “I love it when people ignore me!” can rarely be taken literally. On the other hand, consider “What fun it is to spend a holiday solving maths problems!” This sentence is sarcastic for people who dislike mathematics but sincere for those who love it. Very often, the real key lies in the context behind the sentence. Just as humans need context to interpret meaning, so do computers, and current computational research is moving towards exploiting exactly this. For tweets, too, if the writer has not marked them as sarcastic, knowing the context becomes essential. Reading the tweets posted before and after a given tweet gives an idea of the author’s feelings about the topic and helps the computer understand that person’s outlook. Then, if someone tweets “Politicians are never wrong. #politics”, the computer can work out whether or not it is sarcastic.

The many forms of sarcasm

Research on detecting sarcasm is now branching out in several directions: recognising the different types of sarcasm, distinguishing irony from sarcasm, and building language- and culture-specific features into evaluation methods. Although most of this research has been done for English, similar methods could be applied to other languages.

“Machine learning approaches rely on identifying the right indicators of sarcasm. A method built for one language can be applied to another only if similar data or cultural cues are available in that language. The notion that there could be a single, universal sarcasm classifier is a myth,” says Dr. Joshi. The dataset or the features of the classifier would have to be adapted to the language and culture concerned. “Applying sarcasm detection methods to different cultures and languages could be a direction for future work,” he adds.

But while we train all these systems to detect sarcasm, how likely is it that they will turn it on us? Not of their own accord, says Dr. Joshi, but they can be programmed to make sarcastic remarks in the right circumstances. Siri, Cortana and the digital assistants on shopping websites are all chatbots: computer programs that can talk and behave like humans. “Bots can be trained to be sarcastic, but they cannot be sarcastic all the time. If chatbots are to serve as assistants, the designers of the software must assess the situations in which sarcasm is appropriate! A chatbot being sarcastic on a formal occasion would not do. However, if an angry customer has been rude to it for a long while, the bot could offer a sarcastic reply. This shows that when a bot should be sarcastic has much in common with when a human should be.”

Using a sentiment analysis system to label a statement as positive or negative is all very well. But when the same system can also detect sarcasm, its accuracy in identifying the sentiment behind a text improves by 4%.

"भावनांच्या विश्लेषणाचे कोडे उलगडण्यात उपरोध ओळखण्यातील सुधारणेचा महत्त्वाचा वाटा आहे. भावनिक विश्लेषणाबद्दल असलेले शोधनिबंध सध्या  उपरोध ओळखण्याच्या क्षमतेबद्दल मौन बाळगून असतात. मात्र उपरोध ओळखणे शक्य झाले तर भावनिक विश्लेषण करणाऱ्या प्रणाली आणखी विस्तृत काम करू शकतील.” असे डॉ. जोशी सांगतात. 

Section: General, Science, Technology, Deep-dive
Bengaluru Monday, 26 November, 2018 - 15:39

In a recent study, researchers from the Institute for Stem Cell Biology and Regenerative Medicine (InStem), the National Centre for Biological Sciences (NCBS), Bengaluru, and the University of Edinburgh, UK, have deciphered an exciting role of a human protein commonly found in the brain. The protein, called Fragile-X mental retardation protein (FMRP), plays a vital role in the development of cognitive functions.

Until now, FMRP was known to be present in the cytoplasm, the fluid inside cells, where it binds to ribosomes and regulates the production of some vital proteins. The loss of this protein results in a genetic condition called Fragile X syndrome, which causes learning and cognitive disabilities. However, it is now known that this protein is also found in the nucleus of cells. So, what is it doing there? This question got the researchers behind the current study thinking.

The researchers found that FMRP in the nucleus interacts with specific RNA molecules called small nucleolar RNAs (snoRNAs), some of which are known to modify the ribosomal RNA by adding a methyl group to them. “Our work, for the first time, defines an interaction of an RNA binding protein, FMRP, with a specific subset of C/D box snoRNAs”, say the authors of the study published in the journal iScience and partially supported by the Department of Biotechnology.

“This interaction may have an influence in regulating rRNA methylation in humans. In the absence of FMRP, methylation was altered”, they note.

The addition of a methyl group to ribosomal RNAs has biological significance, since it contributes to heterogeneity, or variations, in the ribosomes. This variation might lead to different rates of protein synthesis in the cell. Since FMRP also interacts with ribosomes, can it recognise these methylations? “We found that FMRP recognizes ribosomes carrying specific methylation patterns on the rRNA”, say the authors.

The findings of the study indicate a potential link between the functions of this protein in the nucleus and in the cytoplasm of a cell, which might help in the regulation of protein synthesis.

“Our results identify a nuclear function of FMRP and imply that it can integrate translation regulation between the nucleus and cytoplasm,” conclude the authors.

Section: General, Science, Health, News
Delhi Sunday, 25 November, 2018 - 23:23

India is the largest consumer of antibiotics in the world today, thanks to our habit of popping pills to fight every single infection—be it bacterial or not! The ugly side of this reckless use of antibiotics is the rise of drug-resistant strains of bacteria that have learnt to outlive the effects of these drugs. What if we could detect the presence of bacteria early enough to treat them with drugs that actually work? In a new study, scientists from the Indian Institute of Technology Delhi (IIT Delhi) have developed a platform to detect bacterial growth using fluorescent carbon nanoparticles. This method, the researchers say, is very accurate and can identify bacterial growth much faster than existing ones.

Detecting bacterial growth is a common need, not just in healthcare but in a wide range of applications.

“Rapid bacterial growth sensing solves many problems in healthcare, food industry and the environment”, says Prof. Neetu Singh from IIT Delhi, who led the study. “For clinical diagnosis and treatment, rapid identification of bacteria can be critical. In the case of bacterial infections, a ‘point-of-care’ diagnosis, where test results are provided while you wait, would allow immediate commencement of antibiotic therapy and prevent the spread of the disease”, she explains.

The study, published in the journal Chemical Communications and supported by the Department of Biotechnology, tested the use of tiny, fluorescent nanoparticles to detect bacterial growth. “Carbon dots are one of the members of carbon-based nanomaterials with size ranging from 2–10 nanometres”, says Prof. Singh. When bacteria respire, they produce acids. The acids lower the pH and oxidise the carbon dots, causing changes in the structure of the particles. As a result, the particles emit a brighter fluorescent glow.

The researchers placed a few Escherichia coli (E. coli) bacteria and the fluorescent nanoparticles in small gel microspheres that hold them together. Once the bacteria started to divide, the carbon dots began to glow brighter. This change could be observed under a microscope, thus helping detect bacteria in a sample. The change in fluorescence intensity also helps identify whether the bacteria in the sample are resistant to antibiotics: antibiotic-resistant bacteria keep growing and cause brighter fluorescence, while non-resistant ones remain in the lag phase and show weaker fluorescence.
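
Schematically, the read-out amounts to watching for a rise in fluorescence over a few hours. The sketch below is our illustration of that logic, with made-up thresholds and readings; it is not the study's software.

```python
# Illustrative read-out logic: growing bacteria acidify the gel, oxidise
# the carbon dots, and the fluorescence intensity rises.
def shows_growth(intensities, fold_change=1.5):
    """Flag growth if fluorescence rises `fold_change`-fold over baseline."""
    baseline = intensities[0]
    return any(reading >= fold_change * baseline for reading in intensities)

# Hypothetical hourly fluorescence readings from two gel microspheres
# incubated with an antibiotic:
susceptible = [100, 104, 101, 103, 105, 102, 106]  # growth suppressed
resistant   = [100, 118, 140, 168, 205, 250, 300]  # keeps growing

print(shows_growth(susceptible))  # False -> antibiotic works
print(shows_growth(resistant))    # True  -> resistant strain
```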

Existing methods to detect bacteria take about 10 to 48 hours to produce conclusive results and require large quantities of samples to detect bacterial presence accurately. The proposed method can identify bacterial growth much faster. “Our approach allows us to determine bacterial growth within 4 to 6 hours, which is currently unachievable with any other methods used clinically”, remarks Prof. Singh. The device can also detect bacteria that are resistant to common antibiotics.

The researchers have developed a prototype of their approach, consisting of the carbon dots and the gel microspheres, to which bacteria can be added. It is currently at a preliminary stage and has been developed to assess the growth of bacteria and to test their susceptibility to antibiotics.

“It can be adopted after testing the specificity against different microorganisms before utilising it for clinical sample testing”, says Prof. Singh on plans for the future.

Innovations such as these can help in detecting and preventing the rapid spread of drug-resistant infections that kill thousands.

Section: General, Science, Technology, Health, Deep-dive
Mumbai Sunday, 25 November, 2018 - 22:43

Much of modern life is unimaginable without electronic devices: the computer or mobile phone on which you are reading this article, the television in your living room, the microwave oven in your kitchen, and so on. But in the years ahead, electronic devices may well give way to spintronic devices, which exploit a quantum mechanical property of electrons called spin. In a new study, researchers from the Indian Institute of Technology Bombay (IIT Bombay) and the Tata Institute of Fundamental Research (TIFR) have demonstrated that heat energy can be converted into a ‘spin current’. Their work was featured on the cover page of the journal Applied Physics Letters.

The spin of the electron was first discovered in the 1920s by the German scientists Otto Stern and Walther Gerlach. They found that when an external magnetic field is applied, an electron behaves as if it were spinning about an axis. But unlike the spinning of a top, an electron's spin is an intrinsic property of the electron, unlikely to be caused by any real physical rotation. Electrons are called ‘spin-up’ for clockwise and ‘spin-down’ for anticlockwise rotation.

In the present study, the researchers have provided experimental evidence for the ‘spin Nernst effect’, named after the Nobel laureate Walther Nernst. The spin Nernst effect is a theoretical prediction that when the two ends of a non-magnetic material are held at different temperatures, electrons with different spins separate in the direction perpendicular to the flow of heat.

Prof. Tulapurkar of IIT Bombay, an author of the study, explains:

"अर्धचालक चिप्स इलेक्ट्रॉनों की गति पर आधारित होते हैं जब इनपर विद्युत क्षेत्र लागू होता है जो कंप्यूटर, मोबाइल फोन इत्यादि जैसे इलेक्ट्रॉनिक उपकरणों में उपयोग किए जाते हैं। ये उपकरण केवल इलेक्ट्रॉन के आवेश और द्रव्यमान पर निर्भर होते हैं, और चक्रण को पूरी तरह अनदेखा कर दिया जाता है। स्पिंट्रोनिक्स (स्पिन + इलेक्ट्रॉनिक्स) में इलेक्ट्रॉनों के चक्रण  का पूरा उपयोग अधिक कार्यक्षमता और कम बिजली की खपत वाले उपकरणों को  बनाने में किया जाता है। "

The researchers' experiment, carried out using the nano-fabrication facilities at IIT Bombay, involved heating platinum and separating the spin-up and spin-down electrons. To detect the spins, they used a platinum crossbar coated with a magnetic metal on its top and bottom surfaces. When the platinum bar was heated at its centre, the flow of heat within the platinum moved the spin-up and spin-down electrons apart, in opposite directions. The motion of the electrons was then measured as a voltage difference between the top and bottom terminals.

Explaining the significance of the work, Arnab Bose, a research scholar at IIT Bombay and the lead author of the paper, says, “It was believed that a spin current could be created only by passing heat or an electric current through ferromagnetic materials. We have made a pioneering observation that heat can generate a spin current in a non-magnet as well. Our work is important for various applications, and it is also very appealing from the point of view of fundamental physics.”

The researchers believe the discovery could open up a whole new world of applications, including compact digital data storage and energy-efficient devices. The basic idea is that the spin-up and spin-down states can be used to encode information.

"एक मेमोरी सेल के तारों में चुंबकीयकरण की दिशा को ऊपर से नीचे या नीचे से ऊपर बदलना शामिल है। स्पिन धारा का उपयोग करके यह कार्य कुशलता से किया जा सकता है। अब स्पिन धारा स्पिन-नर्नस्ट प्रभाव का उपयोग करके बेकाम ऊष्मा (इलेक्ट्रॉनिक उपकरणों द्वारा छितराया हुआ ) से उत्पन्न किया जा सकता है। इस प्रकार स्पिन-नर्नस्ट प्रभाव चुंबकीय स्मृति को लिखने के लिए संभावित उपयोगों में काम आ सकता है।", कहकर प्रोफेसर तुलापुरकर ने अपनी बात पूरी की। 

Section: General, Science, Deep-dive
Bengaluru Friday, 23 November, 2018 - 10:00

In 2009, a journalist named Christopher McDougall published a book called “Born to Run: A Hidden Tribe, Superathletes, and the Greatest Race the World Has Never Seen”. It is an odd combination of popular science, a tirade against the modern running-shoe industry, and a true story.

The true story was most readers’ introduction to the legendary Native American tribe of long-distance runners, the Rarámuri. They live today in Mexico and are the titular “superathletes” who routinely run hundreds of kilometres at a stretch to hunt and to play. Several long-distance runners from the USA, including McDougall, took part in a fifty-mile race against their Rarámuri counterparts through the Copper Canyons of Mexico. The story is nothing less than an uninhibited celebration of the human ability to run, its joy, its beauty, its spirituality.

The popular science, however, leaves a lot to be desired. McDougall is plainly a journalist with an eye for a story and, even more plainly, not a scientist. He explains a theory of human evolution, called the 'endurance-running theory', and strongly advocates it with little to no critical examination. As much as the book inspired me to run, it also piqued my curiosity about its central scientific assertion: did human beings really evolve to run long distances?

From Africa to the World: The story of human evolution

Scientifically speaking, humans belong to the family Hominidae, subfamily Homininae, genus Homo and species sapiens. The origins of modern humans are regarded, by consensus, to lie somewhere in the eastern and southern parts of the African continent, with one or more migrations out towards Eurasia, a theory called 'Out of Africa'. The most recent wave of emigrants left Africa about 70 to 50 thousand years ago, at a time when our close evolutionary cousins, the Neanderthals and Denisovans, both members of Homininae, still co-existed and interbred with early humans. Evidence for a third, intriguingly unknown species of hominin, named ‘Hominin X’, has been found in DNA samples from the tribal populations of the Andaman and Nicobar islands. Approximately 2% of their DNA is unexplained in its origin, and it is hypothesised that this is accounted for by their ancestors mating with the mysterious Hominin X, which has left few other traces behind.

This theory of humanity’s cradle of life has been contested by other hypotheses, though none has yet amassed enough evidence to replace the Out of Africa theory. For example, the ‘Kumari Model’ of human origins, posited by A. R. Vasudevan, argues that humans first evolved on a now-submerged continent in the Indian Ocean known as Kumari Land. This argument is based on data from the National Geographic Genographic Project and asserts that humans migrated out of Kumari Land via two routes, one into Africa and another into Europe and Asia. However, what genetic evidence exists for the relatedness of European and Indian populations can be explained by the Out of Africa theory. Furthermore, the fact that Kumari Land was never above sea level during this period has effectively rendered this theory highly unlikely.

The human family tree from the blog Filthy Monkey Men by Adam Benton with references to scientific papers therein

The tropical plains of Africa are still the most likely stage on which the drama of early human evolution played out, and many tribes of hunter-gatherers still live there today. Some of these peoples practice what could be the closest living approximation of the early human lifestyle, and one such example is the !Kung people of central and southern Africa. Many are now farmers, but a good fraction of the population still follows its traditional nomadic lifestyle, though the advance of modernity is continuously eroding it. Can their hunting practices throw some light on the viability of the endurance-running theory?

Running to a bigger brain?

The endurance-running theory posits that the human ability to run sustainably long distances shaped the course of our evolution over the last couple of million years, at least in part by creating a unique hunting possibility. Most large mammals easily outpace humans during short bursts of running (sprints), but almost none can keep it up for longer than a few minutes. By chasing prey over long distances without a break, humans could have forced their quarry to overheat and collapse, or at least slow them down and weaken them enough to be killed quickly. This practice of “running an animal to death”, known formally as “persistence-hunting”, would have brought a steady supply of nutrient-rich meat into the early hunter-gatherer diet, as argued by Bramble and Lieberman. It is still practised by a few hunter-gatherer societies today, such as the Kalahari bushmen of Africa and, of course, the Rarámuri of Mexico.

One implication of Bramble and Lieberman’s theory is that persistence-running may also have contributed to the evolution of the human brain over this period of our species’ history. Persistence-hunting is an intrinsically social activity, requiring cooperation and communication between members of the hunting band, and these social aspects could have shaped cognitive evolution. The skills would likely have been taught to younger generations by both word of mouth and practical guidance, further increasing communication and social complexity.

In effect, tracking and anticipating the movements of prey would have required higher-level cognitive abilities, and selection for such abilities might have affected brain evolution. The aerobic activity of running itself is very conducive to the development of new nerve cells: it increases the baseline levels of proteins, such as neurotrophins and growth factors, that contribute to the growth and development of new nerve cells (neurons). Finally, the steady supply of calorically dense meat could feed an energy-hungry brain, promoting better hunting skills, thereby a higher quantity of meat for still larger brains, leading to even better hunting skills, and so on.

The recent study of the !Kung people addressed a question central to the plausibility of the endurance-running hypothesis and its implications for brain evolution—is persistence hunting indeed energetically-rewarding enough to have conferred a selective advantage in human evolution?

The energetics of modern persistence-hunting: Is it all worth it?

The !Kung people live in modern-day Namibia, Botswana and Angola, in an environment likely similar to the one from which early humans emerged, making their society an excellent one to study persistence-hunting in real life. In a study, the Energy Return on Investment (EROI)—the ratio of the energy gained on completing a process to the energy invested into the same process—was calculated from the persistence-hunting practice of the !Kung people.

The Greater Kudu (Tragelaphus strepsiceros) is a species of antelope hunted as game by the !Kung people. The authors of the above study calculated the EROIs generated by a typical hunt for small, average and large Kudus by comparing the energetic cost of the chase to the calorific gains from the meat of the carcass. This metric was calculated as the energy gained from the Kudu (if it were eaten), multiplied by the success rate of a hunt and divided by the energy invested by the hunters. To calculate the number of days a single carcass could sustain a family, the energy spent in the hunt was subtracted from the total energy a Kudu could yield if eaten, and the result divided by the average daily energy expenditure of the hunters and their families.
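
Restated as code, the two metrics look like this (the formulas follow the description above; the numbers in the example calls are placeholders, not the study's measurements):

```python
def eroi(energy_from_kudu, success_rate, energy_invested):
    # Energy gained if the Kudu is eaten, weighted by the odds that the
    # hunt succeeds, per unit of energy the hunters spend on the chase.
    return energy_from_kudu * success_rate / energy_invested

def days_sustained(energy_from_kudu, energy_invested, daily_family_needs):
    # Net energy from the carcass divided by the family's daily expenditure.
    return (energy_from_kudu - energy_invested) / daily_family_needs

# Hypothetical values in kilocalories:
print(eroi(240_000, 0.5, 3_000))               # -> 40.0, i.e. a 40:1 return
print(days_sustained(240_000, 3_000, 25_000))  # -> ~9.5 days
```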

The results were clear—the EROI was enormous, with a return of energy of 26:1 to 44:1 for a small Kudu (meaning that for every unit of energy invested in hunting it, a small Kudu yielded 26 to 44 units), 34:1 to 57:1 for an average Kudu, and 41:1 to 70:1 for a large Kudu. These returns are enough to support a family for 6.7 to 11.2 days—undoubtedly worth the investment of running an antelope to death!

A body built to run?

Humans are indisputably poor sprinters. The fastest footspeed ever attained by a human is 44.72 kilometres per hour, a speed easily surpassed by cheetahs, greyhounds, horses and antelopes running at less than their maximum speeds. In contrast, it is just as indisputable that humans are among the best endurance-runners on Earth. Though our speed is far below the top galloping speed of horses, we can outrun horses over very long distances.

The Rarámuri can run over 300 kilometres without stopping, and urban humans practice their own form of recreational endurance-running through marathons. Indeed, history was made on the 16th of September this year, when Eliud Kipchoge of Kenya broke the world record for the fastest ever marathon, completing the 42.2-kilometre run in 2 hours, 1 minute and 39 seconds in Berlin. Elite human runners even compete in ultramarathons through harsh terrain, such as the Leadville Trail 100 Run, a 160-kilometre run through the Rocky Mountains of North America, and the Hell Race, a series of gruelling runs through the Himalayas. The popularity of running as a recreational activity is behavioural evidence of our ability to run, but far stronger evidence comes from our anatomy and physiology.

Humans are bipedal—we walk on two limbs unlike quadrupeds, which walk on four. Bipedalism is an inherently unstable way to propel the body forward. When running, the body’s centre of mass (the point around which the mass of the whole body is balanced) is swung over the stride of the extended leg and the kinetic energy of the body changes. The tendons and ligaments of the legs and the arch of the foot absorb energy during the initial foot-strike and then recoil like springs to release this strain-energy during the second part of the stride, the propulsive phase. The foot-strike is the part of a stride that carries the most impact, transmitting forces up to 3-4 times the body weight through the entire skeletal system. The joints of the lower half of the human body, including the femoral head, where the thigh bone meets the pelvis, and the knee joint, have substantially larger surface areas than comparable anatomical regions in the fossil cousins and ancestors of modern humans, which could help absorb this impact.

Anatomy of the human leg (via Wikimedia Commons)

The arches of the human foot (image obtained via Wikimedia Commons and modified)

The Achilles tendon, which connects the heel to the calf muscles (plantar flexors), is particularly important as a spring mechanism, storing and releasing energy. It is notably absent in the other species of great apes, which can walk but not run. Indeed, the skeletal correlate of the Achilles tendon, the calcaneal tuber, is shorter in both modern and early humans compared to Neanderthals, increasing the spring’s efficiency. The plantar arch of the foot is another critical spring-like structure, coupled with a medial flange that projects over the proximal cuboid bone and restricts rotation between the anterior and posterior parts of the foot, giving the foot an energy-saving bounce.

Stabilisation during running is crucial because, unlike in walking, at certain times no part of the body has any contact with the ground. The initial rotation of the trunk therefore has no counterbalancing rotational forces generated by contact with the ground. The corresponding counter-twist comes from the body itself—a narrow waist that permits rotation of the trunk relative to the hips; broad shoulders that counterbalance the movements of the arms; and greater structural independence of the shoulders and head, which permits unimpeded counter-rotations of the arms and shoulders. The structural autonomy of the pectoral girdle (shoulders) and head requires a stabilising structure; the nuchal ligament, which runs between the back of the skull and the upper trapezius muscle, stretching along the upper part of the spine over the shoulder blade, serves this purpose. These structures, too, are notably absent in other species of great apes.

If persistence-hunting is to bring down prey, it is essential that the predators themselves do not overheat. Humans have very little fur on their bodies and have a dense concentration of sweat glands in the skin, a couple of hundred per square centimetre. Sweating is a far more efficient way of cooling than panting, which evaporates water from the smaller surface area of the mouth and lungs. Panting is also distinct from mouth-breathing, another human behaviour that makes endurance running easier. The process of combined mouth- and nose-breathing, and the rate at which it happens, is decoupled from the mechanics of locomotion.

In most running quadrupeds, the synchronised movement of the diaphragm and visceral organs during running works to push air in and out of the lungs, so running speed is closely tied to the respiratory rate at a 1:1 ratio. Hence, they cannot increase their running speed without increasing their respiratory rate. Further, due to the necessity of panting through the mouth, there is a point of trade-off where greater running speed, requiring greater heat dissipation, does not allow for fast enough panting.

The metabolic Cost Of Transport (COT) measures the efficiency with which energy is used to move a body across a distance. This metric increases linearly with running speed and varies with body mass. Though humans have a 50% higher COT than a typical mammal, an increase in human running speed has a minimal effect on COT. This favourable trade-off between speed and metabolic efficiency, leading to potentially large calorific gains through greater hunting success, could plausibly have given endurance-running a selective advantage.
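
For reference, the standard definition of this metric (assumed here; the article does not spell it out) is the metabolic energy $E$ spent to move a body of mass $m$ over a distance $d$:

$$\mathrm{COT} = \frac{E}{m\,d}$$

Since speed $v = d/t$, COT can equally be written as metabolic power divided by $m\,v$; if power rises roughly in proportion to speed, COT stays nearly flat, which is consistent with the observation that running faster costs humans little extra energy per kilometre.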

While anatomy and physiology are the classical evidence cited in favour of the endurance-running theory, recent experiments in mice may be opening up a whole new line of genetic inquiry. Humans and chimpanzees share 98% of their DNA, and the gene CMAH (CMP-Neu5Ac hydroxylase) was one of the first genetic differences identified between them, dated to about 2-3 million years ago. Chimps possess a functional copy of this gene, i.e., it codes for a protein, but humans possess a loss-of-function version of the same gene. Now, researchers at the University of California have taken a lineage of mice with this same loss-of-function version of the gene and studied their running abilities. Not only did they find greater running endurance in these mice, but also greater resistance to muscle fatigue, greater consumption of oxygen by the muscles (necessary to maintain activity), and an increase in certain metabolic products, indicating changes in the underlying pathways. This is all the more fascinating when one remembers that it is humans, and not chimps, who are the runners among the great apes.

Just a just-so story?

Elegant as the persistence-running theory is, many remain unconvinced, mainly because most of it, if not all, could be reduced to a ‘just-so’ story. H. sapiens is indisputably and uniquely suited to long-distance running, but other links in the theoretical chain have been questioned. Did bipedalism itself evolve as a consequence of running? Did endurance-running make persistence-hunting or scavenging possible? Did the physical traits cited as specialised for endurance-running evolve under some other selective pressure and only subsequently get co-opted?

Some features such as an abundance of spring-like tendons in the leg and well-developed gluteus maximus muscles are indeed more useful during running than walking. Others, such as joints with larger cross-sectional areas, trunk rotation and a pectoral girdle decoupled from the head make carrying and transporting heavy weights possible. Efficient control of body temperature, bipedalism, proportionally longer legs and the plantar arch are just as useful for walking long distances in the sun. Hence, some researchers opine that those features may have been selected earlier for that function instead.

Paleo-archaeological studies have offered some vital evidence against the persistence-hunting hypothesis. If early hominins did indeed run their prey to death, they would have disproportionately killed the animals most likely to collapse first—the young and old that could not move fast enough to escape, and pregnant females more likely to overheat. When this prediction was tested against bovid fossils found in the Olduvai Gorge in Tanzania, where early humans had hunted, the opposite was found! There was a disproportionate abundance of fossils from prime adults, as opposed to weak, sick and old animals. Also, early Homo is inferred to have lived in savannah-woodlands with compact soil ill-suited to preserving tracks, and with dense vegetation that would not permit prey to be spotted from a distance. Persistence-hunting works best under arid conditions on sparsely-vegetated plains, a point supported by the comparative rarity with which modern hunter-gatherers practice it, and then only in hot, open environments.

In an argument that strikes closer to the heart of this theory of human brain evolution, some studies point out that tracking prey for persistence-hunting is an immensely sophisticated cognitive activity. While wolves and lions also hunt in packs, requiring similar cognitive skills, they do not use weapons, and they rely on a sense of smell that humans do not possess. Such sophisticated hunting-related cognition would already have required the enormous brain that persistence-hunting is said to have made possible, and Homo is unlikely to have had such capabilities early in its evolution. The persistence-hunters of today, like the Kalahari hunters, have characteristically large modern human brains, not to mention complex language. They also have far more efficient weapons, whose rudimentary precursors appear only late in the fossil record, well after endurance-running is thought to have evolved.

A few tentative conclusions

The image of a band of brothers, burgeoning brains hungry for meat, effortlessly chasing down a panting antelope is undoubtedly a romantic one, but not necessarily the basis for a sound theory of human evolution. As with most complex issues, the truth probably lies somewhere between the extremes. Endurance-running may well have been the ‘unintended’ consequence of multiple traits that were initially selected for other purposes. While it seems unlikely that human intelligence is a direct consequence of persistence-hunting, it is clear from the !Kung people that the practice can be a huge advantage under the right circumstances. These data, coupled with those from paleo-archaeological studies, raise the possibility that the cause and effect may run the other way around! Perhaps it was the combination of a body selected for walking and carrying weights and a growing human brain that made persistence-hunting possible, not only by improving tracking abilities but also by enabling hunters to seek out environments suited to endurance-running.

Section: General, Science, Health, Society, Friday Features, News+Views, Featured
