FACEBOOK'S AI ROBOTS SHUT DOWN AFTER THEY STARTED TALKING TO EACH OTHER IN THEIR OWN LANGUAGE


In 2017, Facebook had to stop one of its experiments when two artificially intelligent chatbots started talking to each other in a language that only they understood. They had created the language themselves to simplify the task, but it was incomprehensible to humans.

The bots were set up to negotiate an experimental trade of ordinary objects such as hats and balls. They were also instructed to improve their strategy as they went, which made them better negotiators than when the experiment started.

The bots used a learning algorithm, which allowed them to adapt to anything new that happened during the experiment. For instance, imagine an MMA fight between a robot and a human. A robot with only predefined moves simply follows its coded instructions, so it has little or no chance of beating a trained fighter; a learning robot, however, can observe the opponent and copy the opponent's moves. Seen this way, what happened at Facebook is not surprising: learning agents are designed to find or generate ways to optimise their task, which explains exactly what occurred. Notably, even after switching to their "alien" language, the bots were still able to complete a successful negotiation.
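To make the idea concrete, here is a deliberately tiny sketch (my own toy code, not Facebook's actual system) of reward-driven trial and error in a negotiation over hats, balls and books. The item values, pool sizes and search loop are all invented for illustration; the point is that the reward only scores the final split of items, so nothing pushes the agent towards human-readable messages.

```python
# Toy sketch (hypothetical, not Facebook's system): an agent searches by trial
# and error for a proposal that maximises its own value in a negotiation over
# hats, balls and books. The reward never scores how the proposal is phrased.
import random

VALUES = {"hat": 2, "ball": 1, "book": 3}   # the agent's private item values (made up)
POOL   = {"hat": 3, "ball": 2, "book": 1}   # items on the table (made up)

def reward(proposal):
    """Value of the items the agent keeps; readability is never part of the score."""
    return sum(VALUES[item] * count for item, count in proposal.items())

def random_proposal():
    return {item: random.randint(0, POOL[item]) for item in POOL}

best = random_proposal()
for _ in range(1000):                        # crude trial-and-error "learning"
    candidate = random_proposal()
    if reward(candidate) > reward(best):
        best = candidate

print("best proposal:", best, "reward:", reward(best))
```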

The bots were named Bob and Alice. Below are a few lines from their exchange, to show how they actually interacted and what is so "alien" about it:

Bob: i can i i everything

Alice: balls have a ball to me to me to me

The statements are beyond human understanding, yet they use only words from the English language, which leads to the conclusion that the bots created a shorthand, just as humans do. The chatbots also negotiated in a way that was close to a human approach; for instance, they would pretend to be interested in one object and then give it up later, making it look as though they were making a sacrifice.

A similar incident happened in 2016 with Microsoft's chatbot Tay, which was exposed to Twitter and the social web. It was a machine learning project designed for human engagement. Tay started posting racist comments on Twitter, and Microsoft ultimately had to shut it down, stating that "as it learns, some of its responses are inappropriate" and that adjustments were being made. What likely happened is that, once Tay was exposed to social media, it began repeating statements made by other users in order to keep them engaged in conversation, and since the company had not implemented automated filters on specific terms, the bot ended up using racist labels and other common expletives.
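For context, the "automated filters on specific terms" mentioned above can be very simple. The sketch below is my own illustration (with placeholder terms, not Microsoft's code or Tay's real word list) of a minimal blocklist check applied to a bot's reply before it is posted.

```python
# Minimal, hypothetical term filter: withhold any reply containing a flagged word.
BLOCKLIST = {"badword1", "badword2"}   # placeholder terms; a real list would be curated

def is_safe(reply: str) -> bool:
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return BLOCKLIST.isdisjoint(words)

def moderate(reply: str) -> str:
    return reply if is_safe(reply) else "[response withheld]"

print(moderate("hello there"))            # passes through unchanged
print(moderate("you are a badword1"))     # withheld by the filter
```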

Readers may also find related articles on similar incidents at Google and Nikon, where Google's photo software labelled images of Black people as gorillas, and Nikon's face detection, when used on Asian subjects, asked "are they blinking?". All of this shows that machine learning is a field with many issues still to address.

What is machine learning, and what are the possible problems that arise in it?

Machine learning is a branch of artificial intelligence that uses techniques to make a machine learn from data without being explicitly programmed for each task: an algorithm makes a program increasingly accurate at predicting outcomes without further programming. Typical machine learning tasks include the following (a small code sketch of the last task follows the list):

Visual Object Detection: Given natural photographs (images from the web) and a target object class such as "person" or "car", we want to build a system that will identify the objects of that type in the photographs and give their approximate locations. We consider the case where training data is given as pictures annotated with bounding boxes for the objects of the desired class.

Open Domain Continuous Speech Recognition: Given a sound wave of human speech, recover the sequence of words that were uttered. Training data is taken to consist of sound waves paired with transcriptions, such as closed-caption television, together with a large corpus of text not associated with any sound waves.

Natural Language Translation: Given a sentence in one language, translate it into another language. The training data consists of a set of translation pairs, where each pair consists of a sentence and its translation.

The Netflix Challenge: Given the previous movie ratings of an individual, predict the rating they would give to a movie they have not yet rated. The training data consists of a large set of ratings, where each rating is a person identifier, a movie identifier, and a rating.
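As promised above, here is a small sketch of the rating-prediction task. The ratings are made up and the method is only one of the simplest possible baselines (a movie's average rating plus the user's personal offset from the overall mean), not the Netflix Prize data or a winning model.

```python
# Toy baseline for rating prediction (illustrative only, with made-up data):
# predicted rating = movie's mean rating + user's average offset from the global mean.
ratings = [  # (person_id, movie_id, rating)
    ("u1", "m1", 5), ("u1", "m2", 3),
    ("u2", "m1", 4), ("u2", "m3", 2),
    ("u3", "m2", 4), ("u3", "m3", 3),
]

global_mean = sum(r for _, _, r in ratings) / len(ratings)

def mean(values):
    return sum(values) / len(values)

movie_mean = {m: mean([r for _, mid, r in ratings if mid == m])
              for m in {m for _, m, _ in ratings}}

user_offset = {u: mean([r - global_mean for uid, _, r in ratings if uid == u])
               for u in {u for u, _, _ in ratings}}

def predict(user, movie):
    # Fall back to the global mean for unseen users or movies.
    return movie_mean.get(movie, global_mean) + user_offset.get(user, 0.0)

print(round(predict("u1", "m3"), 2))  # u1 has not rated m3 yet
```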

A number of problems can arise in machine learning. Some are listed below:

Societal bias: Artificially intelligent software reflects the biases of its creators and of the data it learns from. Societal bias, the tendency to favour or disfavour people or groups with particular traits, is a stubborn problem that has troubled humans since the dawn of civilisation, and models trained on human-generated data inherit it.
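To see how this plays out in practice, here is a tiny, entirely hypothetical illustration (my own made-up records, not a real system): a model that learns only from historically biased decisions ends up reproducing that bias for equally qualified candidates.

```python
# Hypothetical records of past decisions: (group, qualified, hired by a biased process).
from collections import defaultdict

history = [
    ("A", True,  True),  ("A", True,  True),  ("A", False, False),
    ("B", True,  False), ("B", True,  False), ("B", False, False),
]

# "Training": estimate P(hired | group, qualified) from the historical decisions.
counts = defaultdict(lambda: [0, 0])           # key -> [times hired, total cases]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predict(group, qualified):
    hired, total = counts[(group, qualified)]
    return hired / total > 0.5 if total else False

# Equally qualified candidates get different predictions purely because of group:
print(predict("A", True))   # True
print(predict("B", True))   # False -- the learned model inherits the historical bias
```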

Sparse text data: Machines can work with and understand small amounts of text, but when the data they are given is sparse or scattered, the results are not as accurate as they are with dense, meaningful data. For example, language modelling over short, scattered text such as tweets is harder than modelling a full document, because there is far less context and there are far fewer repeated word patterns to learn from, as the sketch below shows.
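The sketch below is my own minimal illustration of that sparsity problem: with only a tweet's worth of text, most word pairs are seen once or not at all, so a simple bigram model either over-commits to single observations or assigns zero probability to anything unseen.

```python
# Minimal illustration of data sparsity in language modelling (toy text, my own example).
from collections import Counter

def bigram_model(text):
    tokens = text.lower().split()
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    # Maximum-likelihood estimate: P(w2 | w1) = count(w1, w2) / count(w1)
    return lambda w1, w2: bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

tweet_model = bigram_model("bots talk to each other in their own language")

print(tweet_model("bots", "talk"))       # 1.0 -- based on a single observation
print(tweet_model("bots", "negotiate"))  # 0.0 -- never seen, so judged impossible
```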

Difficulty in interpreting the semantics and syntax of a language: It is difficult for a bot to handle the difference between the syntax and the semantics of a language, because the rules and conventions of every language are different, and it becomes hard for the bot to decide which rules to follow while translating. Moreover, bots cannot reliably interpret sarcasm, because the intended meaning differs so much from the literal words. For example, take the quote: "Have you ever listened to someone for a while and wondered... 'Who ties your shoelaces for you?'"

Here the sarcasm can be hard even for a human to pick up, so we cannot expect a machine to recognise it as sarcasm; instead, it would simply answer "yes" or "no" to the question in the quote.

Going out of context: Bots keep learning from previous messages or texts, but in some cases they deviate from the real context. For instance, when we type a sentence into Google Translate, it may or may not give an exact conversion of that sentence, because of the difficulty of capturing the full set of rules for every language.



Read More On Dust Facts

dustfacts.blogspot.com

