Tag: generative ai

  • Is AI The Real Threat To Jobs, Privacy? Expert Sheds Light On Critical Aspects

    New Delhi: AI is revolutionizing industries around the globe, from healthcare to the tech and creative industries, by automating tedious tasks and opening doors to new opportunities. While concerns about job displacement exist, AI offers avenues for growth through upskilling and the creation of roles that didn’t exist before.

    Ethical AI governance and public-private partnerships, backed by appropriate cybersecurity infrastructure, can ensure that this technology serves humanity’s best interests. As AI evolves, it is transforming the global landscape, and the challenge is to strike a balance between progress, safety, and opportunity.

    In a recent email interview, Anand Birje, the CEO of Encora and former Digital Business Head of HCL Technologies, shared his insights on the existential risks posed by advanced technologies.

    How Is Generative AI Impacting Job Creation?

    AI is reshaping the job landscape, but it is not a simple story of replacement. We can see major shifts in healthcare, tech, creative fields, and every other vertical, with AI augmenting the scope of existing roles by reducing repetitive and mundane tasks. However, while a percentage of roles that involve routine tasks may get phased out, AI will also create entirely new roles, responsibilities and positions that currently do not exist.

    For enterprises as well as individuals, the key to navigating these times of change is adaptation. According to him, “We need to focus on training people and create a culture where upskilling and reskilling are constant. This cultural shift requires a change in individual mindset and must form an essential part of change management strategies for enterprises”.

    Forward-looking enterprises are already helping their people realize and appreciate the true scale of the change AI is bringing, along with the challenges it poses and the opportunities it presents for them to progress in their careers.

    AI is not the existential threat to jobs that many fear; however, it will force us to reinvent the nature of work and evolve as individuals in the process to harness its full potential. You can draw a parallel with the wheel.

    Humans could and did travel and transport goods before its invention, but the wheel allowed us to save energy and time to focus on other areas and opened new avenues of progress for our civilization.

    Does End-to-End Encryption Fail to Prevent Data Leaks On Social Media Platforms?

    Trust in social media platforms is a big issue right now, affecting millions of users globally, including all of us. Encryption helps, but it is not enough; it’s just one piece of a complex puzzle. What we need is a multilayered approach that involves transparency, compliance, and accountability. Recent times have seen a shift in this direction, with companies disclosing where user data is stored as well as how they plan to leverage it.

    As for regulations, we need to find the right balance. According to him, “We need frameworks that protect users while still allowing for technological progress. These frameworks must address the unique complexities of different geographies, comply with local regulations and global standards, and safeguard user privacy while leaving room for innovation and creativity”.

    The tech industry must step up and adopt a ‘privacy by design’ approach. This means building guardrails into products and services from the ground up, not as an afterthought.

    This is truer than ever in a world where AI is being leveraged for identity theft, misinformation, and manipulation. Ultimately, building trust will require deeper collaboration between tech companies, regulators, and users themselves, and this is a key factor to consider as we redesign digital channels to adapt to an AI world.

    The Existential Risk of AI: Should We Be Concerned? 

    We should take these warnings seriously. But it is also crucial to differentiate between immediate, concrete risks and long-term, speculative concerns. The real threats we face today are not sci-fi scenarios of AI domination. They are more subtle – things like AI bias, privacy breaches, echo chambers, and the spread of misinformation. These are real problems affecting real people right now.

    To address these, we need collaboration. It is not something any one company or even one country can solve alone. According to him, “We need governments, tech firms, and academics working together to ensure that standards for ethics, transparency and compliance are set for areas that involve AI usage. Public education in the benefits of AI as well as the pitfalls associated with it is also important, to ensure safe use”.

    But here is the thing–while we work on these risks, we cannot forget the good AI can do. It is a powerful tool that could help solve big global problems. We need to be careful with AI, but also hopeful about what it can achieve. This is a big challenge for our generation, and we need to step up to it.

    Where Does The Government Fall Short In Addressing Digital Fraud?

    Online financial fraud is a growing concern. While the government has made efforts, we are still playing catch-up. The main challenge is speed – cybercriminals move fast, and our legal and regulatory frameworks often struggle to keep up. With the advent of modern technologies such as Gen AI, cybercrime continues to grow in sophistication, scale, and speed.

    Regulatory bodies and government agencies must work together with technology companies and bring the best technological talent to bear against cybercrime. According to him, “We need to think outside the box, for instance, build a real-time threat sharing platform between technology companies and government agencies that can help identify and stop financial cybercrime in its tracks”.

    We also need a more proactive strategy and an update to the legal framework. Conventional laws are ill-equipped to deal with modern cybercrime and this can lead to apathy or lack of speed when addressing it.

    Digital literacy is crucial too; many frauds succeed simply because people are not aware of the risks. This holds especially true for a country like India, where widespread internet penetration in rural areas, and thus for the majority of the population, is a recent phenomenon.

    To sum up, the risk of AI being used for financial cybercrime is very real. To combat it effectively, we need better technology, smarter regulation, improved education, and closer collaboration across sectors.

    Is It Time For Governments To Regulate AI?

    In my view, some level of government oversight for AI is not just advisable, but necessary. Ideally created through public-private partnerships, this oversight is needed to ensure safety and ethical usage of AI even as the technology quickly becomes ubiquitous in our drive to infuse creativity and innovation across work streams.

    We need a framework that is flexible and adaptable and focuses on transparency, accountability, and fairness. The regulatory approach would depend heavily on local government bodies; however, it can be tiered so that the level of oversight and regulatory requirements are directly proportional to an AI system’s capabilities and potential impact.

    For instance, an AI being used to help marketers make their copy more engaging does not require the same level of oversight as an AI that helps process insurance claims for the healthcare industry.

    According to him, “We also need to think about AI’s broader societal impact and take active steps to address issues like job displacement and data privacy. By keeping them firmly in our sights, we can ensure that the policies being developed to regulate AI are in the best interest of the public and align with our values and human rights”.

    Effective AI regulation will require ongoing dialogue between policymakers, industry leaders, and the public. It is about striking the right balance between innovation and responsible development, harnessing the technology’s full potential while protecting our civilization from its side-effects.

    Are AI and Robotics A Danger To Humanity?

    Look, ‘Terminator’ makes for great entertainment, but we are far from that reality. For the first time, AI can make decisions, evolving from ‘tools’ to ‘agents’, and the real and immediate risks are not about AI taking over the world but about how humans might misuse the massive potential it brings to the table. At present, we should be more concerned about the use of AI for privacy invasions, autonomous weapons, misinformation, and disinformation.

    According to him, “We are at a crucial point in shaping its development, a few moments before the technology becomes ubiquitous. We need to prioritize safety and global governance frameworks, create clear ethical guidelines and failsafe mechanisms, invest in AI literacy, and keep humans in control of critical decisions”.

    Prevention is about being proactive. The goal should be to use AI wisely. We should not fear it, but we do need to guide it in the right direction. It is all about finding that sweet spot between progress and responsibility.

    How Vulnerable Are AI Military Systems To Cyberattacks?  

    This is an important question. As AI gets integrated more closely with our existing infrastructure, there are a few areas where it has the potential to cause the most chaos. According to him, AI in military systems is one of these areas that requires us to tread with extreme caution.

    From data poisoning that manipulates decisions and adversarial attacks, to theft of sensitive data and unauthorized access, there are many ways in which AI integration can create vulnerabilities and challenges for the military and cause significant damage in the process.

    For instance, evasion attacks can alter the colour of a few pixels in a way that is imperceptible to the human eye, yet causes an AI model to misclassify the image, and to do so with confidence. This can be used to attack AI systems involved in facial detection or target recognition, with disastrous consequences.
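
    To make the idea concrete, here is a minimal, purely illustrative sketch of one well-known evasion technique, the Fast Gradient Sign Method (FGSM), written in Python with PyTorch. It assumes a hypothetical pretrained image classifier `model`, an input tensor `img`, and its true class `label`; none of these refer to any real military system. The attack nudges pixel values by an imperceptibly small amount in the direction that most increases the model’s loss, which is often enough to flip the predicted class.

    ```python
    # Minimal FGSM evasion-attack sketch (illustrative only; hypothetical model and data).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.01):
        """Return an adversarially perturbed copy of `image`.

        `epsilon` bounds the per-pixel change, keeping the perturbation
        visually imperceptible while still shifting the model's prediction.
        """
        image = image.clone().detach().requires_grad_(True)
        logits = model(image)                        # forward pass
        loss = F.cross_entropy(logits, true_label)   # loss w.r.t. the correct label
        loss.backward()                              # gradients w.r.t. the input pixels
        # Step each pixel slightly in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Hypothetical usage: `model` is any image classifier, `img` a (1, 3, H, W)
    # tensor scaled to [0, 1], and `label` its true class id as a LongTensor.
    # adv = fgsm_perturb(model, img, label)
    # print(model(img).argmax(1), model(adv).argmax(1))  # predictions may differ
    ```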

    So how do we tackle this? We need best-in-class cybersecurity and robust AI systems that can explain their decisions for human verification. This is an area where government agencies are advised to work closely with technology companies to implement AI systems that can identify and resist manipulation, bring in Zero Trust Architecture for sensitive digital infrastructure and involve humans in the decision-making process for important situations.
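
    Along the same lines, and only as an illustration of the human-in-the-loop and robustness checks mentioned above (not a description of any real military or vendor system), the hypothetical Python/PyTorch sketch below flags a classifier’s decision for human review whenever the prediction is low-confidence or changes under a small random perturbation of the input.

    ```python
    # Illustrative human-in-the-loop gating sketch (hypothetical model and thresholds).
    import torch

    def needs_human_review(model, image, epsilon=0.01, min_confidence=0.9):
        """Flag a prediction for human review when it is low-confidence or not
        robust to a small random perturbation of the input."""
        with torch.no_grad():
            probs = model(image).softmax(dim=1)
            conf, pred = probs.max(dim=1)
            # Re-check the decision on a slightly perturbed copy of the input;
            # a flipped prediction suggests the decision is not robust.
            noisy = (image + epsilon * torch.randn_like(image)).clamp(0.0, 1.0)
            noisy_pred = model(noisy).argmax(dim=1)
        low_confidence = bool((conf < min_confidence).any())
        flipped = bool((pred != noisy_pred).any())
        return low_confidence or flipped

    # Hypothetical usage: escalate instead of acting automatically.
    # if needs_human_review(model, img):
    #     route_to_human_analyst(img)   # placeholder for an operator workflow
    ```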

    AI should support military decision-making, not replace human judgment. 

  • Hanooman AI Launched In India With Support For 98 Languages—Here’s What You Need To Know

    New Delhi: 3AI Holding Limited, an AI investment company from Abu Dhabi, and SML India have launched Hanooman, a generative artificial intelligence (GenAI) platform that supports 98 global languages, including 12 Indian languages.

    The platform aims to reach 200 million users in its first year. It is already available in India, accessible via the web and through a mobile app for Android users on the Play Store; an iOS version is coming soon to the App Store.

    The supported Indian languages are Hindi, Marathi, Gujarati, Bengali, Kannada, Odia, Punjabi, Assamese, Tamil, Telugu, Malayalam, and Sindhi. “Through our strategic partnership with SML India, we strive to cater to a diverse spectrum of users, making AI inclusive and available to everyone, regardless of their ethnicity or location,” Arjun Prasad, MD of 3AI Holding, said in a statement.

    “With its launch, we aim to impact the lives of 200 million users within the first year alone,” said Vishnu Vardhan, Co-Founder & CEO, SML India. “About 80 per cent of Indians can’t use English, hence, Hanooman’s capabilities to support Indian languages will bring GenAI to the reach of everyone in India and open massive opportunities for companies and startups bringing Gen AI products to the market,” he added.

    As part of the launch, SML India announced its partnership with leading technology stalwarts and innovators like HP, NASSCOM, and Yotta. Through the partnership, Yotta will provide GPU cloud infrastructure to bolster SML India’s operations.

    Additionally, its partnership with NASSCOM is aimed at several initiatives, like supporting AI startups, fostering fintech innovation, engaging with 3,000 colleges, and participating in research programmes. (With IANS Inputs)

  • Google Maps Utilizes Generative AI For Uncovering New Places; Check Details Here

    New Delhi: Google Maps is set to undergo a significant upgrade with the integration of generative AI, aimed at enhancing the user experience. Through this new feature, the tech giant plans to revolutionize how users discover places, receive recommendations, and interact with the app.

    The addition of large language models and personalized suggestions marks a notable shift in the capabilities of Google Maps, offering a glimpse into the future of AI-driven navigation and exploration.

    According to a blog post from Google, Maps will utilize large language models (LLMs) to examine over 250 million locations and input from more than 300 million Local Guides. This will enable the app to provide suggestions tailored to user preferences by considering details from nearby businesses, including photos, reviews, and ratings.

    Users can also pose follow-up questions, such as ‘How about lunch?’, to receive recommendations for places that align with their previous inquiries. They will then have the option to add the suggested place to a list or share it with friends.

    As the tech giant notes, users can ask Maps about activities suitable for a rainy day. In response, the app will suggest indoor options, such as comedy shows or movie theaters nearby, along with reviews from people who have already rated those places.
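
    Google has not published the underlying implementation, but conceptually the feature pairs structured place data with a conversational model. The Python sketch below is purely illustrative: the place records and the `ask_llm` function are hypothetical stand-ins, not Google’s actual API. It shows how details such as category, rating, and a review snippet for nearby businesses might be folded into a prompt for a query like “What can I do on a rainy day?”.

    ```python
    # Illustrative sketch: combining nearby-place data with a conversational query
    # for an LLM-based recommendation. Place records and ask_llm() are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Place:
        name: str
        category: str
        rating: float
        review_snippet: str

    nearby_places = [
        Place("Laugh Track Comedy Club", "comedy show", 4.6, "Great indoor venue for a rainy evening."),
        Place("Grand Cinema", "movie theater", 4.3, "Comfortable seats, good for a lazy afternoon."),
        Place("Riverside Trail", "hiking", 4.8, "Beautiful views, best in dry weather."),
    ]

    def build_prompt(user_query: str, places: list) -> str:
        """Fold nearby-place details (category, rating, review) into one prompt."""
        lines = [f"- {p.name} ({p.category}, rated {p.rating}): {p.review_snippet}" for p in places]
        return (
            f"User request: {user_query}\n"
            "Nearby places:\n" + "\n".join(lines) + "\n"
            "Recommend the places that best fit the request and explain why."
        )

    def ask_llm(prompt: str) -> str:
        # Placeholder for a call to whichever large language model is used.
        raise NotImplementedError("stand-in for a real LLM call")

    prompt = build_prompt("What can I do on a rainy day?", nearby_places)
    # print(ask_llm(prompt))  # would return a conversational recommendation
    ```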

    The initial functionality supported by generative AI will only be accessible to a small group of Local Guides located in the United States. However, Google has not provided details about when it will become available for individuals residing in other countries.

    Although the differences between the new search results and traditional queries are not clear yet, it’s likely that the company will use generative AI to provide conversational Bard-style responses instead of presenting a list of places or activities. While currently available to a limited audience, the potential for this innovative feature to expand globally hints at an exciting future for navigation technology. 

  • Amazon Fall 2023 Launch Event: Amazon brings many products including a new Fire TV Stick and Soundbar; check the prices

    Amazon Fall 2023 Launch Event: Amazon has introduced new versions of its popular devices. At its September hardware launch event, the company announced new Echo and Fire TV devices and also explained how generative AI is being used in Amazon Alexa. According to the company, thanks to generative AI, Alexa is ready to become a more powerful and conversational virtual assistant than ever before. Customers will be able to preview the new generative AI features, which will work on all Echo devices already on the market. Here are the key announcements from Wednesday’s hardware launch event.

    Echo Show 8: The event started with the launch of a new Echo Show 8 smart display. It is equipped with spatial audio support and smart home hub functionality, and it can understand the acoustics of a room and adjust the sound accordingly. It also gets a new proximity sensor, a front-facing camera, and a physical button to turn off the microphone. Pre-orders have started in America at a price of $149 (approximately Rs 12,377).

    Generative AI in Alexa: Amazon Alexa now has the power of generative AI, and the feature will come to all Echo-enabled devices. The company says its generative AI model has been designed and optimized for voice. With its help, people will be able to control their smart home products more easily, get real-time information, and have a better conversational experience. To enable third-party developers to connect their LLMs with Alexa, the company will open its API. Based on user interactions, Alexa will also send personal reminders, and it will be able to answer multiple questions from a single command. American customers will soon get to try the generative AI preview for free; the company has not said anything about availability in other countries.

    New Fire TV Stick models and Fire TV Soundbar: At the event, the company launched next-gen Fire TV Stick models, the Fire TV Stick 4K and Fire TV Stick 4K Max, along with a Fire TV Soundbar. The processor in the new Fire TV Sticks has been upgraded, and they are claimed to perform faster than the previous models. The Fire TV Stick 4K supports Dolby Vision, Wi-Fi 6, HDR10, and HDR10+, while the Fire TV Stick 4K Max has 16GB of storage and Wi-Fi 6E support. The Fire TV Stick 4K Max is Amazon’s first streaming media player to offer the Fire TV ambient experience. The Fire TV Stick 4K is priced at $49.99 (roughly Rs 4,152) and the Fire TV Stick 4K Max at $59.99 (roughly Rs 4,984). The new Fire TV Soundbar supports Bluetooth connectivity and is priced at $119.99 (approximately Rs 9,970) in America. The company said its generative AI is also coming to Fire TV. Amazon also launched new Echo Frames at the event, which are claimed to offer six hours of battery life and start at $269.99 (approximately Rs 22,434).