
Daily Current Affairs 04.05.2023 (April PMI signals services sector had best expansion in 13 years, ‘Number of candidates with criminal cases up from last polls in Karnataka’, The EU’s Artificial Intelligence Act, ‘Adapting to climate change to cost India ₹85.6 lakh crore by 2030’)



1. April PMI signals services sector had best expansion in 13 years

India’s services sector recorded its highest uptick in new business and output levels since June 2010 this April, led by a strong upturn in the finance and insurance segment, as per the seasonally adjusted S&P Global India Services PMI Business Activity Index, which rebounded from March’s 57.8 to 62 in April.

A reading of 50 on the index indicates no change in business activity levels.
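The "50 = no change" benchmark comes from the PMI being a diffusion index. As a rough illustration (the weighting below is the standard diffusion-index formula, not S&P Global's exact methodology, and the firm counts are invented), the index is the share of firms reporting improvement plus half the share reporting no change:

```python
def diffusion_index(up: int, same: int, down: int) -> float:
    """Return a PMI-style diffusion index from survey response counts.

    50 means improvements and deteriorations balance out; above 50
    signals expansion, below 50 signals contraction.
    """
    total = up + same + down
    return 100 * (up + 0.5 * same) / total

# Example: 60 firms report growth, 30 no change, 10 contraction.
print(diffusion_index(60, 30, 10))  # 75.0
print(diffusion_index(25, 50, 25))  # 50.0 (no change on balance)
```

An April reading of 62 therefore means respondents reporting higher activity heavily outnumbered those reporting declines.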

New export orders expanded for the third month in succession and at the fastest pace over this period. However, job creation remained negligible in April and input cost inflation, which had hit a two-and-a-half-year low in March, surged to a three-month high, with firms reporting a rise in costs on food, fuel, medicine, transportation and wages.

Consumer services firms recorded the fastest upturn in average expenses, as per the survey-based monthly indicator, but all services sectors raised selling prices at the fastest pace in 2023, with the most acute price hikes undertaken by transport, information and communication businesses.

Business confidence, which had plummeted to an eight-month low in March, revived a bit in April. Close to 22% of surveyed companies forecast growth of business activity over the course of the coming 12 months, compared with 2% that anticipated a reduction, S&P Global said.

“India’s service sector posted a remarkable performance in April, with demand strength backing the strongest increases in new business and output in just under 13 years,” noted Pollyanna De Lima, economics associate director at S&P Global Market Intelligence. “One area of weakness highlighted in the latest results was the labour market. Despite the substantial pick-up in sales growth and improved business sentiment towards the outlook, the increase in employment seen in April was negligible and failed to gain meaningful traction,” she added.

Outstanding business volumes increased for the sixteenth straight month, but rose “only marginally” in April, reflecting the slowest growth in sixteen months.

The services sectors’ performance lifted overall output in India’s private sector to the highest level since July 2010, with aggregate sales also rising at the fastest pace in almost 13 years. The S&P Global India Composite PMI Output Index rose from 58.4 in March to 61.6 in April. The S&P Global Manufacturing PMI had risen to a four-month high of 57.2 in April.

“Despite the substantial upturn in sales, job creation across the private sector remained mild. Rates of expansion were broadly similar at manufacturing firms and their services counterparts,” S&P Global underlined.

2. ‘Number of candidates with criminal cases up from last polls in Karnataka’

It has gone up from 83 to 96 among BJP candidates, 59 to 122 among Congress candidates, and 41 to 70 among Janata Dal (Secular) candidates between 2018 and now, according to the Association for Democratic Reforms report.

The number of candidates with declared criminal cases in Karnataka has increased in all three major political parties between the 2018 Assembly elections and now.

The number of such candidates has gone up from 83 to 96 in the BJP, 59 to 122 in the Congress, and 41 to 70 in the JD(S), revealed the report ‘Karnataka Assembly Elections 2023: Analysis of Criminal Background, Financial, Education, Gender, and other Details of Candidates’ released by the Association for Democratic Reforms (ADR) on Wednesday.

Eight candidates who have declared murder-related cases, 35 candidates who have declared attempt to murder cases, and 49 candidates who have declared cases of crime against women are contesting in this year’s Assembly elections. Among the candidates who have declared cases of crime against women, one is facing a rape case.

Congress candidates have a higher number of criminal cases against them, including serious criminal cases. While 55% of Congress candidates have declared criminal cases, 31% have serious criminal cases.

In the BJP, 30% of the candidates have serious criminal cases while in the JD(S) 25% of the candidates have declared serious criminal cases.

The ADR has recommended that there should be permanent disqualification of candidates convicted of heinous crimes such as murder, rape, smuggling, dacoity, and kidnapping.

3. The EU’s Artificial Intelligence Act

What are the stipulations mentioned in the new draft document of the European Union’s AI Act? Why are AI tools often called black boxes? What are the four risk categories of AI? How did the popularity of ChatGPT accelerate and change the process of bringing in regulation for artificial intelligence?

The story so far:

After intense last-minute negotiations in the past few weeks on how to bring general-purpose artificial intelligence systems (GPAIS) like OpenAI’s ChatGPT under the ambit of regulation, members of European Parliament reached a preliminary deal this week on a new draft of the European Union’s ambitious Artificial Intelligence Act, first drafted two years ago.

Why regulate artificial intelligence?

As artificial intelligence technologies become omnipresent and their algorithms more advanced — capable of performing a wide variety of tasks including voice assistance, recommending music, driving cars, detecting cancer, and even deciding whether you get shortlisted for a job — the risks and uncertainties associated with them have also ballooned.

Many AI tools are essentially black boxes, meaning even those who designed them cannot explain what goes on inside them to generate a particular output. Complex and unexplainable AI tools have already manifested in wrongful arrests due to AI-enabled facial recognition; discrimination and societal biases seeping into AI outputs; and most recently, in how chatbots based on large language models (LLMs) like Generative Pre-trained Transformer-3 (GPT-3) and GPT-4 can generate versatile, human-competitive and genuine-looking content, which may be inaccurate or copyrighted material.

Recently, industry stakeholders including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking AI labs to stop the training of AI models more powerful than GPT-4 for six months, citing potential risks to society and humanity. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. It urged global policymakers to “dramatically accelerate” the development of “robust” AI governance systems.

How was the AI Act formed?

The legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI and creating a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy. The legislation seeks to strike a balance between promoting “the uptake of AI while mitigating or preventing harms associated with certain uses of the technology”.

Similar to how the EU’s 2018 General Data Protection Regulation (GDPR) made it an industry leader in the global data protection regime, the AI law aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market” and ensure that AI in Europe respects the 27-country bloc’s values and rules.

What does the draft document entail?

The draft of the AI Act broadly defines AI as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. It identifies AI tools based on machine learning and deep learning, knowledge- and logic-based approaches, and statistical approaches. The Act’s central approach is the classification of AI tech based on the level of risk it poses to the “health and safety or fundamental rights” of a person. There are four risk categories in the Act — unacceptable, high, limited and minimal.

The Act prohibits using technologies in the unacceptable risk category with little exception. These include the use of real-time facial and biometric identification systems in public spaces; systems of social scoring of citizens by governments leading to “unjustified and disproportionate detrimental treatment”; subliminal techniques to distort a person’s behaviour; and technologies which can exploit vulnerabilities of the young or elderly, or persons with disabilities.

The Act lays substantial focus on AI in the high-risk category, prescribing a number of pre-and post-market requirements for developers and users of such systems. Some systems falling under this category include biometric identification and categorisation of natural persons, AI used in healthcare, education, employment (recruitment), law enforcement, justice delivery systems, and tools that provide access to essential private and public services (including access to financial services such as loan approval systems). The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies or those under development can be included if they meet the high-risk criteria. Before high-risk AI systems can make it to the market, they will be subject to strict reviews known in the Act as ‘conformity assessments’— algorithmic impact assessments to analyse data sets fed to AI tools, biases, how users interact with the system, and the overall design and monitoring of system outputs. It also requires such systems to be transparent, explainable, allow human oversight and give clear and adequate information to the user. Moreover, since AI algorithms are specifically designed to evolve over time, high-risk systems must also comply with mandatory post-market monitoring obligations such as logging performance data and maintaining continuous compliance, with special attention paid to how these programmes change through their lifetime.

AI systems in the limited and minimal risk category such as spam filters or video games are allowed to be used with a few requirements like transparency obligations.

What is the recent proposal on general purpose AI like ChatGPT?

As recently as February this year, general-purpose AI such as the language model-based ChatGPT, used for a plethora of tasks from summarising concepts on the internet to serving up poems, news reports, and even a Colombian court judgment, did not feature in EU lawmakers’ plans for regulating AI technologies. The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.” By mid-April, however, members of the European Parliament were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.

Lawmakers now target the use of copyrighted material by companies deploying generative AI tools such as OpenAI’s ChatGPT or image generator Midjourney, as these tools train themselves from large sets of text and visual data on the internet. They will have to disclose any copyrighted material used to develop their systems. While the current draft does not clarify what obligations GPAIS manufacturers would be subject to, lawmakers are also debating whether all forms of GPAIS should be designated as high-risk. The draft could be amended multiple times before it actually comes into force.

How has the AI industry reacted?

While some industry players have welcomed the legislation, others have warned that broad and strict rules could stifle innovation. Companies have also raised concerns about transparency requirements, fearing that it could mean divulging trade secrets. Explainability requirements in the law have caused unease as it is often not possible for even developers to explain the functioning of algorithms.

Lawmakers and consumer groups, on the other hand, have criticised it for not fully addressing risks from AI systems.

The Act also delegates the process of standardisation for AI technologies to the EU’s expert standard-setting bodies in specific sectors. A Carnegie Endowment paper points out, however, that the standards process has historically been driven by industry, and it will be a challenge to ensure governments and the public have a meaningful seat at the table.

Where does global AI governance currently stand?

The rapidly evolving pace of AI development has led to diverging global views on how to regulate these technologies. The U.S. currently does not have comprehensive AI regulation and has taken a fairly hands-off approach. The Biden administration released a blueprint for an AI Bill of Rights (AIBoR). Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR outlines the harms of AI to economic and civil rights and lays down five principles for mitigating these harms. Instead of a horizontal approach like the EU’s, the blueprint endorses a sector-specific approach to AI governance, with policy interventions for individual sectors such as health, labour, and education, leaving it to sectoral federal agencies to come out with their plans. The administration has described the AIBoR as guidance or a handbook rather than binding legislation.

On the other end of the spectrum, China over the last year came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI. It enacted a law to regulate recommendation algorithms with a focus on how they disseminate information. The Cyberspace Administration of China (CAC), which drafted the rules, told companies to “promote positive energy”, to not “endanger national security or the social public interest” and to “give an explanation” when they harm the legitimate interests of users. Another piece of legislation targets deep synthesis technology used to generate deepfakes. To ensure transparency and understand how algorithms function, China’s AI regulator has also created a registry of algorithms where developers have to register their algorithms, information about the data sets used by them, and potential security risks.

THE GIST

Members of European Parliament reached a preliminary deal this week on a new draft of the European Union’s ambitious Artificial Intelligence Act.

Many AI tools are essentially black boxes, meaning even those who designed them cannot explain what goes on inside them to generate a particular output.

The new legislation seeks to strike a balance between promoting “the uptake of AI while mitigating or preventing harms associated with certain uses of the technology”.

4. India slips in press freedom index, ranks 161 out of 180 nations

The World Press Freedom Index compares the level of press freedom enjoyed by journalists and the media.

India’s ranking in the 2023 World Press Freedom Index has slipped to 161 out of 180 countries, according to the latest report released by the global media watchdog Reporters Without Borders (RSF). In comparison, Pakistan has fared better when it comes to media freedom as it was placed at 150, an improvement from last year’s 157th rank. In 2022, India was ranked at 150.

Sri Lanka also made significant improvement on the index, ranking 135th this year as against 146th in 2022.

Norway, Ireland and Denmark occupied the top three positions in press freedom, while Vietnam, China and North Korea constituted the bottom three.

Reporters Without Borders (RSF) comes out with a global ranking of press freedom every year. RSF is an international NGO whose self-proclaimed aim is to defend and promote media freedom. Headquartered in Paris, it has consultative status with the United Nations. The objective of the World Press Freedom Index, which it releases every year, “is to compare the level of press freedom enjoyed by journalists and media in 180 countries and territories” in the previous calendar year.

RSF defines press freedom as “the ability of journalists as individuals and collectives to select, produce, and disseminate news in the public interest independent of political, economic, legal, and social interference and in the absence of threats to their physical and mental safety”.

Concerns arise

The Indian Women’s Press Corps, the Press Club of India, and the Press Association released a joint statement voicing their concern over the country’s dip in the index.

“The indices of press freedom have worsened in several countries, including India, according to the latest RSF report,” the joint statement said.

“The constraints on press freedom due to hostile working conditions like contractorisation have to also be challenged. Insecure working conditions can never contribute to a free press,” it added.

5. IMD forecasts a cyclone in the Bay of Bengal next week

At risk: Fishing boats should avoid the waters of southeast Bay of Bengal after May 7, the IMD has warned. 

First cyclone to form this year will be called Mocha; weather agency says various models suggest that the cyclone could form by May 9 and grow to a severe cyclonic storm the next day

The India Meteorological Department (IMD) on Wednesday said that a cyclone was likely to form in the Bay of Bengal by early next week, but its strength, direction and impact were yet to be gauged.

“A cyclonic circulation is likely to develop over southeast Bay of Bengal around May 6…there is a possibility of the circulation to move northwards towards central Bay of Bengal,” the IMD said in a statement. “Further details will be given after the low-pressure is formed.”

A low-pressure area is usually a precursor to the development of a cyclone, and according to the IMD’s calculations, it is expected to take shape on May 7.

A preliminary analysis available on the IMD website based on its weather models suggests that the cyclone could form by May 9 and grow to a “severe cyclonic storm” by May 10.

The IMD has a five-step classification for cyclones, with the relatively weakest classified as a “cyclonic storm” (62-88 kmph) and the strongest a “super cyclonic storm” (>222 kmph). A “severe cyclonic storm” (89-117 kmph) is just one step above a “cyclonic storm”.
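The scale can be sketched as a simple lookup over wind-speed bands. The "severe" and "super" bands below are the ones quoted in the text; the intermediate bands follow the standard IMD classification and are included for completeness (this is an illustrative helper, not an official IMD tool):

```python
# IMD cyclone categories as (name, lower bound, upper bound) in kmph.
IMD_SCALE = [
    ("Cyclonic Storm", 62, 88),
    ("Severe Cyclonic Storm", 89, 117),
    ("Very Severe Cyclonic Storm", 118, 166),
    ("Extremely Severe Cyclonic Storm", 167, 221),
    ("Super Cyclonic Storm", 222, float("inf")),
]

def imd_category(wind_kmph: float) -> str:
    """Return the IMD category for a sustained wind speed in kmph."""
    for name, low, high in IMD_SCALE:
        if low <= wind_kmph <= high:
            return name
    return "Below cyclonic-storm strength"

print(imd_category(100))  # Severe Cyclonic Storm
```

So a system forecast to become a "severe cyclonic storm" is expected to sustain winds of 89-117 kmph.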

Depending on the location of the storm and existing weather conditions, it is possible for the storm to gain or reduce in strength. Cyclones are more frequent in India’s neighbourhood around May, October and November — or coincident with the advent and departure of the monsoon.

This will be the first cyclone to form this year and it will be called Cyclone Mocha. The name was proposed by Yemen, after the Red Sea port city, following an international convention on naming cyclones.

M. Mohapatra, Director-General, IMD, said that while greater clarity on the cyclone would only emerge later, the current warnings were for fishermen. “Seafaring vessels and the fishing community should avoid the waters of southeast Bay of Bengal after May 7 because it will be turbulent.”

6. ‘Adapting to climate change to cost India ₹85.6 lakh crore by 2030’

Green push: India needs a big improvement in its energy-mix in favour of renewables to about 80% by 2070-71, says RBI’s DEPR.

RBI’s Department of Economic and Policy Research sees need to cut energy intensity of GDP by 5% annually to achieve net zero target by 2070; green financing requirement estimated to be at least 2.5% of GDP a year to address infrastructure gap

The cumulative total expenditure for adapting to climate change in India is estimated to reach ₹85.6 lakh crore (at 2011-12 prices) by 2030, the Reserve Bank of India’s (RBI) Department of Economic and Policy Research (DEPR) said in its Report on Currency & Finance 2022-23.

India’s goal of achieving the net zero target by 2070 would require an accelerated reduction in the energy intensity of GDP by about 5% annually and a significant improvement in its energy-mix in favour of renewables to about 80% by 2070-71, the DEPR said in its report themed ‘Towards a Greener Cleaner India’.
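A back-of-the-envelope calculation shows what a sustained ~5% annual cut in energy intensity implies over the period to the 2070 target year (the rate and horizon are taken from the report as quoted above; the start year is assumed to be the present):

```python
# Compound effect of cutting energy intensity of GDP by ~5% every year.
years = 2070 - 2023   # horizon to the net zero target year (assumed start: 2023)
annual_cut = 0.05     # ~5% reduction per year, as per the DEPR report

remaining = (1 - annual_cut) ** years
print(f"Energy intensity in 2070 as a share of today's: {remaining:.1%}")
```

Compounded over nearly five decades, the 5% annual pace amounts to roughly a 90% cumulative reduction in the energy intensity of GDP.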

India’s green financing requirement is estimated to be at least 2.5% of GDP annually till 2030 to address the infrastructure gap caused by climate events, and the financial system may have to mobilise adequate resources and also reallocate current resources to contribute effectively to the country’s net-zero target, it added.

Results of a climate stress-test reveal that public sector banks (PSBs) may be more vulnerable than private sector banks. Globally, however, measurement of climate-related financial risks remains a work in progress.

“A pilot survey of key stakeholders in the financial system in India suggests that notwithstanding rising awareness about climate risks and their potential impact on the financial health of entities, risk mitigation plans are largely at the discussion stage and yet to be widely implemented,” the RBI’s policy researchers added.

kurukshetraiasacademy
