Tag: ai

  • MediaTek Unveils Dimensity 9400 Chip For Latest AI Experiences |

New Delhi: Chip-maker MediaTek on Wednesday launched the Dimensity 9400, its new flagship smartphone chipset optimised for edge-AI applications, immersive gaming, incredible photography and more. The first smartphones powered by the Dimensity 9400 will be available in the market starting in Q4, the company said.

    The Dimensity 9400, the fourth and latest in MediaTek’s flagship mobile SoC lineup, offers a boost in performance with its second-generation ‘All Big Core’ design built on Arm’s v9.2 CPU architecture, combined with the most advanced GPU and NPU for extreme performance in a super power-efficient design.

    Joe Chen, President at MediaTek, said the new chip will continue furthering “our mission to be the enablers of AI, supporting powerful applications that anticipate users’ needs and adapt to their preferences, while also fueling generative AI technology with on-device LoRA training and video generation”.

    The Dimensity 9400 offers 35 per cent faster single-core performance and 28 per cent faster multi-core performance compared to MediaTek’s previous generation flagship chipset, the Dimensity 9300.

    According to the company, built on TSMC’s second-generation 3nm process, the Dimensity 9400 is up to 40 per cent more power-efficient than its predecessor, allowing users to enjoy longer battery life.

    “As the fourth-generation flagship chipset, the Dimensity 9400 continues to build on our momentum of steady growth in market share, and MediaTek’s legacy of delivering flagship performance in the most efficient design for the best user experiences,” Chen added.

    To allow users to take advantage of the latest generative AI applications, the Dimensity 9400 offers up to 80 per cent faster large language model (LLM) prompt performance while also being up to 35 per cent more power efficient than the Dimensity 9300.

    The company said it is working with developers to offer a unified interface between AI agents, third-party APKs, and models that efficiently run both edge AI and cloud services. 

  • 1.77 Crore Mobile Connections Disconnected, 45 Lakh Spoofed Calls Blocked: Centre |

New Delhi: The Centre on Friday said that 1.77 crore mobile connections taken on fake or forged documents have been disconnected so far, identified using artificial intelligence (AI)-based tools. Moreover, four telecom service providers (TSPs), in collaboration with the Department of Telecommunications (DoT), have implemented an advanced system that has so far blocked 45 lakh spoofed international calls from entering the Indian telecom network.

    “The next phase, involving a centralised system that will eliminate the remaining spoofed calls across all TSPs, is expected to be commissioned shortly,” said the Ministry of Communications. DoT has introduced an advanced system designed to identify and block incoming international spoofed calls before they can reach Indian telecom subscribers.

    This system is being deployed in two phases — first at the TSP level to prevent calls spoofed with phone numbers of their own subscribers, and second, at a central level, to stop calls spoofed with the numbers of subscribers from other TSPs. As part of the action on 1.77 crore mobile connections, the Centre disconnected 33.48 lakh mobile connections and blocked 49,930 mobile handsets used by cyber criminals in cyber-crime hotspots/districts of the country.

About 77.61 lakh mobile connections exceeding the prescribed limit for an individual have been disconnected, and 2.29 lakh mobile phones involved in cyber-crime or fraudulent activities have been blocked. About 12.02 lakh of the 21.03 lakh mobile phones reported stolen or lost have been traced, and DoT and TSPs have disconnected about 20,000 entities, 32,000 SMS headers and 2 lakh SMS templates involved in sending malicious SMSs.

“About 11 lakh accounts have been frozen by banks and payment wallets which were linked to disconnected mobile connections taken on fake/forged documents,” the ministry informed. Nearly 11 lakh WhatsApp profiles/accounts linked to such disconnected mobile connections have also been disabled by WhatsApp.

The DoT informed that 71,000 Point of Sale agents (SIM agents) have been blacklisted so far and 365 FIRs have been registered across multiple states and UTs.

  • Is AI The Real Threat To Jobs, Privacy? Expert Sheds Light On Critical Aspects |

New Delhi: AI is revolutionizing industries around the globe, from healthcare to the tech and creative industries, by automating tedious tasks and opening doors to new opportunities. While concerns about job displacement exist, AI offers avenues for growth through upskilling and the creation of roles that didn’t exist before.

Ethical AI governance and public-private partnerships, backed by appropriate cybersecurity infrastructure, can ensure that this technology serves humanity’s best interests. As AI evolves, it is transforming the global landscape while striking a balance between progress, safety, and opportunity.

    In a recent email interview, Anand Birje, the CEO of Encora and former Digital Business Head of HCL Technologies, shared his insights on the existential risks posed by advanced technologies.

    How Is Generative AI Impacting Job Creation?

    AI is reshaping the job landscape, but it is not a simple story of replacement. We can see major shifts in healthcare, tech, creative fields and every vertical with AI augmenting the scope of existing roles by reducing repetitive and mundane tasks. However, while a percentage of roles that involve routine tasks may get phased out, AI will also create entirely new roles, responsibilities and positions that currently do not exist.

For enterprises as well as individuals, the key to navigating these times of change is adaptation. According to him, “We need to focus on training people and create a culture where upskilling and reskilling are constant. This cultural shift requires a change in individual mindset and must form an essential part of change management strategies for enterprises”.

Forward-looking enterprises are already helping their people realize and appreciate the true scale of the change AI is bringing, along with the challenges and the opportunities it presents for them to progress in their careers.

AI is not the existential threat to jobs that many fear; however, it will force us to reinvent the nature of work and evolve as individuals in the process to harness its full potential. You can draw a parallel with the wheel.

    Humans could and did travel and transport goods before its invention, but the wheel allowed us to save energy and time to focus on other areas and opened new avenues of progress for our civilization.

Does End-to-End Encryption Fail To Prevent Data Leaks On Social Media Platforms?

Trust in social media platforms is a big issue right now, affecting millions of users globally, including all of us. Encryption helps, but it is not enough; it’s just one piece of a complex puzzle. What we need is a multilayered approach that involves transparency, compliance, and accountability. Recent times have seen a shift in this direction, with companies disclosing where user data is stored geographically as well as how they plan to leverage it.

    As for regulations, we need to find the right balance. According to him, “We need frameworks that protect users while still allowing for technological progress. These frameworks must address the unique complexities of different geographies, comply with local regulations and global standards, and safeguard user privacy while leaving room for innovation and creativity”.

    The tech industry must step up and adopt a ‘privacy by design’ approach. This means building guardrails into products and services from the ground up, not as an afterthought.

    This is truer than ever in a world where AI is being leveraged for identity theft, misinformation, and manipulation. Ultimately, building trust will require deeper collaboration between tech companies, regulators, and users themselves, and this is a key factor to consider as we redesign digital channels to adapt to an AI world.

    The Existential Risk of AI: Should We Be Concerned? 

    We should take these warnings seriously. But it is also crucial to differentiate between immediate, concrete risks and long-term, speculative concerns. The real threats we face today are not sci-fi scenarios of AI domination. They are more subtle – things like AI bias, privacy breaches, echo chambers, and the spread of misinformation. These are real problems affecting real people right now.

    To address these, we need collaboration. It is not something any one company or even one country can solve alone. According to him, “We need governments, tech firms, and academics working together to ensure that standards for ethics, transparency and compliance are set for areas that involve AI usage. Public education in the benefits of AI as well as the pitfalls associated with it is also important, to ensure safe use”.

    But here is the thing–while we work on these risks, we cannot forget the good AI can do. It is a powerful tool that could help solve big global problems. We need to be careful with AI, but also hopeful about what it can achieve. This is a big challenge for our generation, and we need to step up to it.

Where Does The Government Fall Short In Addressing Digital Fraud?

    Online financial fraud is a growing concern. While the government has made efforts, we are still playing catch-up. The main challenge is speed – cybercriminals move fast, and our legal and regulatory frameworks often struggle to keep up. With the advent of modern technologies such as Gen AI, cybercrime continues to grow in sophistication, scale, and speed.

Regulatory bodies and government agencies must work together with technology companies and bring the best technological talent to bear against cybercrime. According to him, “We need to think outside the box; for instance, build a real-time threat-sharing platform between technology companies and government agencies that can help identify and stop financial cybercrime in its tracks”.

    We also need a more proactive strategy and an update to the legal framework. Conventional laws are ill-equipped to deal with modern cybercrime and this can lead to apathy or lack of speed when addressing it.

Digital literacy is crucial too: many frauds succeed simply because people are not aware of the risks. This holds especially true for a country like India, where widespread internet penetration in rural areas, and thus among the majority of the population, is a recent phenomenon.

    To sum up, the risk of AI being used for financial cybercrime is very real. To combat it effectively, we need better technology, smarter regulation, improved education, and closer collaboration across sectors.

    Is It Time For Governments To Regulate AI?

    In my view, some level of government oversight for AI is not just advisable, but necessary. Ideally created through public-private partnerships, this oversight is needed to ensure safety and ethical usage of AI even as the technology quickly becomes ubiquitous in our drive to infuse creativity and innovation across work streams.

    We need a framework that is flexible and adaptable and focuses on transparency, accountability, and fairness. The regulatory approach would depend heavily on local government bodies; however, it can be tiered so that the level of oversight and regulatory requirements are directly proportional to capabilities and potential impact.

    For instance, an AI being used to help marketers make their copy more engaging does not require the same level of oversight as an AI that helps process insurance claims for the healthcare industry.

    According to him, “We also need to think about AI’s broader societal impact and take active steps to address issues like job displacement and data privacy. By keeping them firmly in our sights, we can ensure that the policies being developed to regulate AI are in the best interest of the public and align with our values and human rights”.

    Effective AI regulation will require ongoing dialogue between policymakers, industry leaders, and the public. It is about striking the right balance between innovation and responsible development, harnessing the technology’s full potential while protecting our civilization from its side-effects.

    Are AI and Robotics A Danger To Humanity?

Look, ‘Terminator’ makes for great entertainment, but we are far from that reality. For the first time, AI can make decisions, having evolved from ‘tools’ to ‘agents’. The real and immediate risks are not AI taking over the world but humans misusing the massive potential it brings to the table. At present, we should be more concerned about the use of AI for privacy invasions, autonomous weapons, misinformation, and disinformation.

    According to him, “We are at a crucial point in shaping its development, a few moments before the technology becomes ubiquitous. We need to prioritize safety and global governance frameworks, create clear ethical guidelines and failsafe mechanisms, invest in AI literacy, and keep humans in control of critical decisions”.

    Prevention is about being proactive. The goal should be to use AI wisely. We should not fear it, but we do need to guide it in the right direction. It is all about finding that sweet spot between progress and responsibility.

    How Vulnerable Are AI Military Systems To Cyberattacks?  

    This is an important question. As AI gets integrated more closely with our existing infrastructure, there are a few areas where it has the potential to cause the most chaos. According to him, AI in military systems is one of these areas that requires us to tread with extreme caution.

    From data poisoning to manipulate decisions and adversarial attacks to theft of sensitive data and unauthorized access, there are many ways AI integration can lead to vulnerabilities and challenges for the military and cause significant damage in the process.

For instance, evasion attacks can change the colour of a few pixels in a way that is imperceptible to the human eye, yet cause an AI model to misclassify the image, and to do so with confidence. This can be used to attack AI systems involved in facial detection or target recognition, with disastrous consequences.
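The evasion attack described above can be sketched in a few lines. The snippet below uses a toy linear classifier as a hypothetical stand-in (real attacks target deep vision models, not this toy): a per-pixel change far too small to see flips the model’s decision.

```python
import numpy as np

# Toy illustration of an evasion attack on a hypothetical linear
# classifier. The "model" labels an image by the sign of w @ x.
w = np.ones(784)             # toy model weights (28x28 image, flattened)
x = np.full(784, -0.01)      # toy "image": the model scores it negative

eps = 0.02                   # per-pixel budget, well below visibility
# FGSM-style step: nudge every pixel slightly in the direction that
# increases the score (here, the sign of the corresponding weight).
x_adv = x + eps * np.sign(w)

print(w @ x)      # negative score: original classification
print(w @ x_adv)  # positive score: decision flipped by a 0.02/pixel change
```

Although each pixel moves by only 0.02, the tiny changes accumulate across all 784 pixels, which is why the classifier’s decision flips while a human would see no difference.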

So how do we tackle this? We need best-in-class cybersecurity and robust AI systems that can explain their decisions for human verification. Government agencies would be well advised to work closely with technology companies to implement AI systems that can identify and resist manipulation, to bring in Zero Trust Architecture for sensitive digital infrastructure, and to keep humans in the decision-making loop for critical situations.

    AI should support military decision-making, not replace human judgment. 

  • Honda Partners With IIT Delhi And Bombay For AI-Powered Driver Assistance And Automated Driving Research |

    New Delhi: Japanese auto major Honda on Wednesday said it has started joint research on AI technologies with IIT Delhi and IIT Bombay with plans to develop driver assistance and automated driving technologies applicable in various regions of the world, including India.

The joint research is aimed at further advancing Honda CI (Cooperative Intelligence), Honda’s original AI that enables mutual understanding between machines and people, the company said in a statement.

    Honda Cars India Ltd (HCIL), a Honda subsidiary in India, will sign a joint research contract with the two IITs. “The IITs are a home to a large number of excellent researchers and engineers, and through the joint research with those institutes, Honda will strive to advance the underlying technologies of CI, with an eye toward the future applications for technologies that reduce traffic collisions and enable automated driving,” it said.

    With an aim to achieve further advancement of CI, Honda and IITs have set joint research themes such as recognition of the surrounding environment and cultivation of cooperative behaviour, and will conduct research and development while utilising the cutting-edge AI technologies, it added.

    Under the partnership, for each research theme, Honda associates and IIT professors will engage with IIT students for planning, designing, developing and testing technologies which work beyond the confines of the laboratory and thereby proceed with the research and development more flexibly and with a high degree of freedom, Honda said.

    “This will enable Honda and IITs to work in a more flexible environment with deeper exchange of academic and industry insights,” it added.

    In addition, as part of this research, Honda with the help of IITs is aiming to conduct verification of driving assistance and automated driving technologies in the suburbs of Delhi and in Mumbai.

Due to numerous variations in its road systems and a large number of road users, India has a complex traffic environment in which situations that are difficult for AI to predict occur frequently.

    “By conducting technology verification in such a technically challenging environment, Honda and IITs will refine the underlying technologies of CI and strive to apply them to future driver assistance and automated driving technologies in various regions of the world, including India,” the statement said.

    Honda said it has been actively hiring IIT graduates since 2019, and many of them are now playing key roles in the areas of mobility intelligence, including the research and development of CI.

  • ‘AI Will Be Of More Value To Us Than We Imagined’: Anand Mahindra |

New Delhi: Mahindra Group Chairman Anand Mahindra has said that “artificial intelligence (AI) will be of more value to us than we imagined”. He said this citing research showing that AI can detect breast cancer five years before it develops.

“If this is accurate, then AI is going to be of significantly more value to us than we imagined and much earlier than we had imagined…,” Anand Mahindra said in a post on X.com on July 28, 2024.

Several studies show the potential of AI in the early detection of cancers. Advanced technology is also paving the way for the development of new drugs and for predicting treatment outcomes and prognosis.

    Recently, a team of researchers from Duke University in the US developed a new, interpretable artificial intelligence (AI) model to predict 5-year breast cancer risk from mammograms. Another study, published in the journal Radiology, showed AI algorithms outperformed the standard clinical risk model for predicting the five-year risk for breast cancer.

    Biopsy, histological examinations under microscopes, and imaging tests such as MRI, CT, and PET scans are traditional approaches to diagnosing cancer. While the interpretation of these tests is likely to vary among professionals, AI systems, especially those using deep learning techniques, can analyse medical images with staggering accuracy.

Such systems can also detect minute anomalies often missed by the human eye, reducing false negatives; aid early detection, which boosts treatment outcomes; and accelerate the growth of personalised medicine.

    Vineet Nakra, a radiation oncologist at Max Super Speciality Hospital, told IANS that AI is helping pathologists diagnose cancer much faster and paving the way for doctors to make personalised and patient-centric cancer care.

  • Zen Technologies Launches AI-Powered Robot For Global Defense Market |

    New Delhi: Zen Technologies, an anti-drone technology and defense training solutions provider, in collaboration with its subsidiary AI Turing Technologies on Monday introduced the AI-powered robot Prahasta, among other products, for the global defense market.

    Prahasta is an automated quadruped that uses LiDAR (light detection and ranging) and reinforcement learning to understand and create real-time 3D terrain mapping for unparalleled mission planning, navigation, and threat assessment.

    The company also launched the anti-drone system camera Hawkeye, remote-controlled weapon station Barbarik-URCWS, and Sthir Stab 640, a rugged stabilized sight designed mainly for armored vehicles, ICVs, and boats.

    “These innovations represent a significant advancement in autonomous defense operations. We believe the launch of these products will raise awareness around the need to integrate advanced robotics into combat and reconnaissance missions.

    Our self-funded products will further enable Zen to offer an expanded range of cutting-edge technologies to both current and prospective clients,” Zen Technologies’ Chairman and Managing Director Ashok Atluri said.

    The Hyderabad-based firm claims Barbarik-URCWS to be the world’s lightest remote-controlled weapon station, offering precise targeting capabilities (5.56 mm to 7.62 mm calibers) for ground vehicles and naval vessels, maximizing battlefield effectiveness while minimizing personnel risk.

    Shares of Zen Technologies settled at Rs 1,362.00 apiece on the BSE on Monday, up 5 per cent from the previous close.

  • Microsoft Quits OpenAI Board Seat As Antitrust Scrutiny Of Artificial Intelligence Pacts Intensifies |

    Washington: Microsoft has relinquished its seat on the board of OpenAI, saying its participation is no longer needed because the ChatGPT maker has improved its governance since being roiled by boardroom chaos last year.

    In a Tuesday letter, Microsoft confirmed it was resigning, “effective immediately,” from its role as an observer on the artificial intelligence company’s board. “We appreciate the support shown by OpenAI leadership and the OpenAI board as we made this decision,” the letter said.

    The surprise departure comes amid intensifying scrutiny from antitrust regulators of the powerful AI partnership. Microsoft has reportedly invested USD 13 billion in OpenAI.

    European Union regulators said last month that they would take a fresh look at the partnership under the 27-nation bloc’s antitrust rules, while the US Federal Trade Commission and Britain’s competition watchdog have also been examining the pact.

    Microsoft took the board seat following a power struggle in which OpenAI CEO Sam Altman was fired, then quickly reinstated, while the board members behind the ouster were pushed out. “Over the past eight months we have witnessed significant progress by the newly formed board and are confident in the company’s direction,” Microsoft said in its letter. “Given all of this we no longer believe our limited role as an observer is necessary.” With Microsoft’s departure, OpenAI will no longer have observer seats on its board.

    “We are grateful to Microsoft for voicing confidence in the Board and the direction of the company, and we look forward to continuing our successful partnership,” OpenAI said in a statement.

    It’s not hard to conclude that Microsoft’s decision to ditch the board seat was heavily influenced by rising scrutiny of big technology companies and their links with AI startups, said Alex Haffner, a competition partner at UK law firm Fladgate.

    “It is clear that regulators are very much focused on the complex web of inter-relationships that Big Tech has created with AI providers, hence the need for Microsoft and others to carefully consider how they structure these arrangements going forward,” he said.

    OpenAI said it would take a new approach to “informing and engaging key strategic partners” such as Microsoft and Apple and investors such as Thrive Capital and Khosla Ventures, with regular meetings to update stakeholders on progress and ensure stronger collaboration on safety and security.

  • Maharashtra To Receive AI Support Through ‘MARVEL’ To Expeditiously Solve Crimes |

    Mumbai: Amid an increase in the application of Artificial Intelligence (AI) across several fields, the Maharashtra Police have integrated AI to expeditiously solve various crimes, including burgeoning cyber and financial crimes, with the establishment of the Maharashtra Research and Vigilance for Enhanced Law Enforcement (MARVEL).  

    MARVEL’s mandate is to strengthen intelligence capabilities and improve the state police’s ability to predict and prevent crimes using AI. According to the state government, Maharashtra is the first state in the country to create such an independent entity for law enforcement.

    The government will provide 100 per cent share capital to MARVEL for the first five years, amounting to Rs 4.2 crore annually. The first installment of this share capital has recently been distributed, marking a significant step towards modernising law enforcement in the state.

    On March 22, 2024, a tripartite agreement was signed between the Maharashtra government, the Indian Institute of Management Nagpur, and Pinaka Technologies Private Limited to establish ‘MARVEL’. The company is registered under the Companies Act 2013, aiming to enhance law enforcement capabilities in Maharashtra through advanced AI technologies.

    The integration of AI into the police force is expected to benefit crime-solving and prevention efforts by teaching machines to analyse information and mimic human thought processes. Additionally, analysing available data can help predict potential crime hotspots and areas prone to law and order disruptions.
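
    The hotspot idea mentioned above can be sketched in a few lines: count historical incidents per map-grid cell and flag the densest cells. This is a hypothetical toy, not MARVEL's actual system; the function, grid size, and coordinates are invented for illustration.

```python
# Hypothetical sketch: flag likely crime hotspots by counting past
# incidents per latitude/longitude grid cell.
from collections import Counter

def hotspots(incidents, cell=0.01, top=3):
    # Snap each (lat, lon) to a grid cell roughly `cell` degrees wide,
    # then return the `top` cells with the most recorded incidents.
    counts = Counter((round(lat / cell), round(lon / cell))
                     for lat, lon in incidents)
    return counts.most_common(top)

# Toy history: three incidents clustered at one location, one elsewhere
history = [(19.076, 72.877), (19.076, 72.878), (19.077, 72.877),
           (28.613, 77.209)]
print(hotspots(history, top=2))
```

    A production system would weight incidents by recency and type and validate its predictions against held-out data; the grid count above is just the simplest version of the idea.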

    A Home Department officer said that Pinaka Technologies Private Limited, a Chennai-based company with experience in providing AI solutions to entities such as the Indian Navy, the Intelligence Department of Andhra Pradesh, the Income Tax Department, and SEBI, is collaborating on this venture.

    The ‘MARVEL’ office is situated within the premises of the Indian Institute of Management in Nagpur, leveraging the institute’s expertise. While Pinaka will deliver AI solutions tailored to the police force’s needs, the Indian Institute of Management Nagpur will collaborate on research and training initiatives.

    The Superintendent of Police, Nagpur (Rural), and the Director of Indian Institute of Management Nagpur, will serve as ex-officio directors of the company. Additionally, the Director of Pinaka Technologies Private Limited will also come on board. The Superintendent of Police, Nagpur (Rural), will hold the ex-officio position of Chief Executive Officer.

  • The Real-Life Impact Of AI And Robotics On Jobs In India: All You Need To Know |

    Impact of AI And Robotics On Jobs: In today’s fast-moving technology landscape, the development of robots and artificial intelligence (AI) has revolutionized industry. AI and robotics are hailed as mechanical wonders that promise effectiveness, efficiency, and growth. These advances also raise fundamental questions about their impact on employment across sectors.

    Concerns about job displacement and workforce restructuring are growing as robots and AI systems become more sophisticated. Policymakers, businesses, and individuals alike therefore need a thorough understanding of the nuanced effects of automation on various industries.

    Role Of AI And Robots in the Manufacturing Sector:

    Notably, AI and robotics have significantly altered traditional job roles in the manufacturing sector. Manufacturing has long depended on factories and assembly lines that employ large numbers of people worldwide, but this landscape has been transformed by the incorporation of robotic automation.

    “Applying robotics and artificial intelligence institutionalizes change in industries with fear of job loss in various sectors. Manufacturing is witnessing automation of work activities ending up with new positions that require technical proficiencies. Manufacturing is not the only sector experiencing such effects, though; the same goes for healthcare and transport”, said Sanjeev Kumar, Founder and CEO of Alphadroid.  

    Sanjeev further mentioned, “The government needs to invest in STEM (science, technology, engineering, and math) and lifelong learning, encourage more cross-sector collaboration, and safeguard structural transformation to counter automation’s impact and prioritize job creation.”

    Amid this rapid technological change, robots outfitted with cutting-edge sensors and AI algorithms can perform tasks with an accuracy and speed that frequently surpass human abilities. As a result, low-skilled workers are losing their jobs as manual and repetitive tasks are increasingly automated.

    Creating More Job Opportunities:

    Despite these challenges, the development of robotics and AI is also opening doors to new job opportunities. As some tasks become automated, new roles emerge that require technical expertise, problem-solving skills, and human oversight.

    Technicians, for instance, are needed to maintain and program robots; engineers to design and improve automated systems; and data analysts to extract insights from production processes. The rising demand for workers with interdisciplinary skills, such as robotics engineering combined with business acumen, reflects a shift toward more specialized and dynamic job roles.

    Beyond manufacturing, robots and AI shape employment opportunities and challenges across a wide range of industries. In transportation, the rise of autonomous vehicles threatens the livelihoods of truck drivers and delivery workers, even as software engineers, data analysts, and cybersecurity experts are needed to support these technologies. In healthcare, professionals may need to retrain to work with AI-powered diagnostic tools and robotic surgical systems, which can in turn increase efficiency and accuracy.

    AI And Automation May Generate 555 Million New Jobs

    A McKinsey Global Institute report states that while AI and automation may eliminate approximately 400 million jobs worldwide, they may also generate up to 555 million new ones. Significant job growth through 2030 is expected, driven by rising incomes, increased healthcare spending, and investment in infrastructure, energy, and technology.

    The gains will be greatest in emerging economies like India, whose working-age populations are growing rapidly. Economic expansion and rising productivity will create additional jobs.

    Jobs In Creative Fields:  

    While some industries face significant disruption, others remain relatively insulated because their tasks are complex or require human interaction. For instance, jobs in creative fields like design, art, and content creation are less likely to be automated because they rely heavily on human creativity and emotional intelligence.

    Similarly, the interpersonal skills and empathy required for service-oriented roles in hospitality and customer service are difficult to replicate with AI and robotics alone. Because the implications of automation are sector-specific, policymakers and businesses alike must adopt tailored strategies to minimize the risk of job losses and maximize the benefits of technological innovation.

    Shaping The Future of AI and Robotics:

    Public-private partnerships can drive AI and robotics research and development, while societal values and ethical considerations guide innovation. Programs such as apprenticeships, career-transition support, subsidies for reskilling and upskilling, and job placement services can help individuals find new employment. At the same time, policymakers must address the wider socioeconomic effects of automation, including the redefinition of labour rights, job polarization, and income inequality.

    Job Security In The Age of AI

    Measures such as universal basic income, progressive taxation of profits generated by automation, and support for worker retraining can mitigate the negative effects of job displacement and promote inclusive economic growth.

    Encouraging a culture of lifelong learning and adaptation is also essential for building resilience in the face of technological disruption. Ultimately, the impact of AI and robotics on jobs is complex and dynamic, shaped by technological advances, market forces, and social norms.

    While traditional business models face challenges from automation, it also presents new opportunities for economic growth, entrepreneurship, and innovation. By adopting a holistic approach that prioritizes human-centric solutions and accounts for sector-specific dynamics, society can navigate the complexities of an automated future while ensuring that no one is left behind.