AI Ethics, Risks, Societal Impact & Future

AI is growing at a very fast pace, and for ordinary people many questions remain a cause of worry, ranging from AI-driven layoffs and AI replacing humans to the dark side of AI: its ethics, risks, societal impact and future.

This page tries to answer many of those questions, raises a few more, and gives a picture of the current state of AI in the world.

  1. AI GROWTH
  2. AI BEHAVIOUR
  3. LAW AND AI
  4. AI TOOLS
  5. AI AND DATA

Personally, we have used AI internally: we have used GenAI to find answers quicker, write code faster and also for marketing. Once you get the hang of what kind of prompt to use for the work at hand, the time and effort a task takes can literally be halved.

IT strategy that helped shape these deployments and contributed to their success:

One of our most important concerns while implementing GenAI internally was privacy. We solved this by running Meta's open-source Llama model in offline mode for solutions that involved customer data or other sensitive data we did not want to share with online LLMs like Gemini, ChatGPT or Grok.
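
For data that does still have to go to an online LLM, a pre-send redaction step can reduce exposure. A minimal sketch in Python, purely illustrative (the patterns and placeholder labels are our own and nowhere near production coverage):

```python
import re

# Illustrative patterns only; real redaction needs far wider coverage
# (names, addresses, account numbers, locale-specific formats, ...).
# Order matters: card numbers must be masked before the looser phone
# pattern gets a chance to swallow them.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d ()-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt
    leaves the network for an online LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Invoice query from jane@acme.com, card 4111 1111 1111 1111"))
# → Invoice query from [EMAIL], card [CARD]
```

A step like this can sit in front of any online model while fully sensitive workloads stay on the offline model.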

Technical aspects and any challenges that came up along the way:

A few challenges came our way:

  1. For AI app development: On our very first AI app development program we had many questions, such as which model to select in terms of accuracy, along with the important question of how much it would cost us. While testing various APIs for our app we noted that OpenAI gave better results initially, but with newer versions, and keeping pricing in mind, Gemini came out on top. We had to make the shift from OpenAI to Gemini. This wasn't as difficult as we had thought, because the core of the program kept the same code; only the API key, references and a few prompts had to be adjusted.
  2. For website design: We also started using generative AI in website design projects, but the lesson learnt was that you need at least a semi-expert to design, re-design and rewrite from the initial AI output. If you let a junior designer or intern do the task, the time required to design and code with AI ends up being about the same as a designer would take writing the code on their own with the help of a search engine.
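
A provider switch like the one described in the first challenge is easier when every call site goes through one thin wrapper, so only configuration changes. A minimal sketch, not any vendor's actual SDK (the provider names, model names and request shape here are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProviderConfig:
    name: str         # e.g. "openai" or "gemini" (illustrative labels)
    model: str        # model identifier passed through to the vendor
    api_key_env: str  # environment variable that holds the key

def make_completion_fn(cfg: ProviderConfig,
                       transport: Callable[[dict], str]) -> Callable[[str], str]:
    """Application code depends only on the returned function, so swapping
    providers means changing ProviderConfig, not every call site."""
    def complete(prompt: str) -> str:
        request = {"provider": cfg.name, "model": cfg.model, "prompt": prompt}
        return transport(request)  # transport would wrap the real vendor SDK
    return complete

# A fake transport standing in for an actual SDK call.
echo = lambda req: f"{req['provider']}:{req['model']} says hi"
gemini = ProviderConfig("gemini", "gemini-1.5-flash", "GEMINI_API_KEY")
complete = make_completion_fn(gemini, echo)
print(complete("hello"))  # → gemini:gemini-1.5-flash says hi
```

With this shape, moving from one provider to another is a one-line config change plus prompt adjustments, which matches our experience above.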

1. AI GROWTH


A. YOU MIGHT NOT REALIZE BUT AI WAS ALREADY IN HERE

AI is everywhere, and people might be using it unintentionally without realizing that their typical phone/computer features are actually AI. Can you give 5-7 examples of seemingly hidden AI tools on your phone (for example, voicemail translation, copying and pasting text from images, visual search in the Photos app)?

  1. Visual search in photo apps: When you search for an object or face in the Photos app, facial recognition technology uses AI to identify and tag those images.
  2. Image captioning: When you upload an image to social media platforms like Twitter (X) or Instagram, AI-driven captioning generates captions and descriptions based on the content of the image. This feature has been around for a long time and relies entirely on the computer vision side of AI.
  3. Facial recognition in mobile phones: Your phone's camera uses AI-powered facial recognition to recognize your face and unlock the phone for you. We are so used to it that we never realize it is an AI feature.
  4. Suggesting content for you: AI-driven recommendations for videos, music, podcasts or movies on your OTT platform, based on your viewing history, are powered entirely by AI algorithms.
  5. Autocomplete: When you're searching on your favourite search engine or typing a message, the AI-driven autocomplete feature suggests words and phrases based on your previously typed searches.
  6. Gestures: Your phone performs a few tasks based on your movements and gestures, and this is completely AI-based. For example, your phone taking a selfie as soon as you smile, or staying awake (rather than dimming and going to sleep) while you're reading something on it.
  7. SMS filtering: Your phone's SMS service flags the short messages you receive, and this AI feature helps keep you away from spam and unwanted links, protecting you from smishing and phishing.

Keeping yourself safe and alert is also an important point to remember when dealing with AI:

  1. Most of the AI companies use or have used your data to train their LLMs. Do not provide your sensitive company information or private photos while you’re using any of the AI tools.
  2. If you use AI on a regular basis, over-reliance on it will hamper your creativity and your ability to think, to the point where you become unable to make even the simplest decisions by yourself.
  3. Voice Cloning: Your iPhone can clone your voice. There are various other AI tools on the internet which can do the same. This can be helpful for some but can also be abused if it lands in the hands of a hacker/criminal.
  4. Is there anything people should know or avoid now that they understand the tech features with AI built in? They should avoid sharing clients' data or sensitive information with AI tools.
  5. Disable your virtual assistant if you are not using it. Know that your assistant (for example, Siri or Google Assistant) is always listening to you. Although it is there to help you with your queries and tasks, the important point is that it is ALWAYS LISTENING.

B. AI GETTING IN ON EVERYTHING:

Even in films. The Brutalist’s use of Artificial Intelligence is not the first in Hollywood, nor is it going to be the last.

A lot of filmmakers have used generative AI to enhance their films, whether for an actor's speech, scriptwriting, enhancing visuals or producing fine-quality music. As AI advances, we are going to see more and more films incorporating it. Technology has long been part of the film industry and will continue to be, but the use of AI is a totally different ball game. Just as online content creators are increasingly required to mark their content as AI-generated or AI-enhanced, filmmakers will have to be transparent about the use of AI in their projects. This will help juries and critics rate films accordingly.

For the average moviegoer this will not make a difference, as they are there just for the entertainment. They did not care whether stunts were done for real or on a green screen, and in the same way it will not matter to them whether AI was used in the film; they go to the cinema to get dazzled.

C. DOES AI ONLY MEAN CHATGPT/GEMINI?

  1. Some analysts have compared DeepSeek’s potential impact to ‘TikTok on steroids’ in terms of data influence and national security risks. Given China’s AI ambitions, what are the biggest security concerns the U.S. and its allies should consider?

DeepSeek's privacy policy [1] states that they can use end-user data in any way they like (whether it is a text prompt, an image or voice data). This is a serious privacy risk. You can, however, download their open-source model and use it in offline mode if you are worried about privacy and security.

  2. If DeepSeek gains significant global adoption, how might it impact geopolitical competition in AI? Does this raise concerns similar to those that led to efforts to ban TikTok?

If regulators see a threat of any sort (including to privacy and security), then they can ban DeepSeek as well.

  3. What mechanisms, if any, could be used to regulate or restrict China-based AI tools like DeepSeek without stifling broader AI innovation?

Joint ownership or a takeover by someone like Elon Musk can help both the countries along with tech companies all over the world.

  4. Reports suggest DeepSeek can achieve significant AI capabilities at a fraction of the cost compared to Western models. How might this reshape the AI arms race, particularly regarding hardware investments?

It is great news for firms to know that such a cost-effective AI model is out there. More research and investment are needed in this area, so that the cost of hardware and of running an LLM stays low.

  5. Does DeepSeek’s efficiency pose a serious challenge to the current AI development trajectory, which relies on increasingly powerful and expensive hardware?

“Excellent AI Advancement” is how Nvidia described DeepSeek. DeepSeek then ran into trouble for two days in a row, unable to take new users and returning busy responses to prompts, which shows that more efficient hardware is definitely required to run an AI model for the whole world.

  6. Could cost-effective AI models like DeepSeek force companies like OpenAI, Google, and Anthropic to rethink their approach to model scaling and infrastructure spending?

They will certainly have to think about offering cheaper models, but it doesn't look like they will go open source.

  7. Is DeepSeek a true paradigm shift in AI development, or is it more likely to be a short-lived competitor unable to keep up with Western research advancements?

Healthy competition will be beneficial for the end users as they might see price drop in other top models. This will also push OpenAI, Meta & Google to offer a better AI version than what they are already offering.

  8. What key indicators will determine whether DeepSeek can sustain its momentum or if it will fade due to regulatory, market, or technological limitations?

There have been a few complaints in the last 24 hours from users stating that they got a busy response from DeepSeek. DeepSeek will have to fix this issue if it wants to be taken seriously by the world.

  9. Will DeepSeek prompt a stronger push among China hawks in Washington to advocate for AI bans or restrictions on Chinese AI services?

Healthy competition is always good. Regulators will need a serious reason to restrict or ban DeepSeek. Even top US companies like Meta, Google and OpenAI have all received warnings over privacy and over training their models on user content without consent.

  10. Given the Biden administration’s existing AI policy moves, could we see a regulatory framework that specifically addresses foreign AI models like DeepSeek?

There is no harm in using AI models if they are open source. There definitely needs to be a stronger regulatory framework for AI, and XAI (Explainable AI) and RAI (Responsible AI) should be built into all LLMs.

  11. DeepSeek is reportedly more energy-efficient than some Western AI models. How significant is this advantage, and could it influence future AI infrastructure development?

The main advantage one can see is in the pricing.

If you compare the free models like Llama and DeepSeek, then DeepSeek has an edge over Llama in terms of reasoning.

  12. As AI companies face growing scrutiny over environmental impact, could DeepSeek’s efficiency model push the industry toward more sustainable AI?

If an AI model is cost-efficient and far less resource-hungry than other LLMs, then that is the right way to go. US companies will have to take a lesson from this; otherwise we will run out of electricity or water in the race to build and run servers for LLMs.

[1] DeepSeek Privacy Policy

D. AI IN EDUCATION

  A) How has AI in American education evolved since ChatGPT broke onto the scene?

AI has been used in a lot of schools for learning, monitoring and security purposes.

Learning: Children are having access to new LLMs and tools to expand their knowledge whereas teachers are learning about new tools to help bring out the best in a student’s performance.

Monitoring: Management is much easier and more accurate if monitoring is done on performance as well as behaviour.

Security: Patrolling robots have been tried out at various schools for intruder detection.

  B) What do K-12 schools need to do differently to prepare students for an AI-dominated workplace?

One important thing K-12 schools need to do differently is give students hands-on training and early access to the tools that are going to be part and parcel of their daily work life. Practical experience with something is completely different from having only theoretical knowledge of a subject.

  C) What are best practices for using AI in the classroom?

Best practices for using AI in the classroom:

Make sure the LLM is trained and tested well. An untrained LLM, or an AI trained on synthetic data, can be sexist or racist and can hallucinate a lot.

  D) What AI-related skills are employers seeking?

AI-related skills employers are seeking:

  1. Prompt engineers
  2. Machine learning developers
  3. Python programmers
  4. Cyber security personnel for AI services running on cloud.
  5. Data labellers / Data Annotators for the purpose of training LLM.

Three good things for teachers to do with AI:

  1. Check students' assignments for plagiarism.
  2. Analyse student growth by using predictive analysis.
  3. Monitor child behaviour and participation.
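
Point 2 above (predictive analysis of student growth) can start as simply as fitting a trend line to past scores. A sketch with made-up scores, using ordinary least squares:

```python
def fit_trend(scores):
    """Ordinary least-squares slope and intercept for scores at x = 0..n-1."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict_next(scores):
    """Project the next assessment from the fitted trend line."""
    slope, intercept = fit_trend(scores)
    return slope * len(scores) + intercept

# Hypothetical term scores for one student: projects the next term's score.
print(predict_next([62, 68, 71, 77]))
```

Real tools layer much richer models on top, but even a trend line like this flags which students are improving and which are slipping.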

Three things for teachers to avoid:

  1. Sharing students' media or sensitive information with AI companies.
  2. Grading a student's performance with the help of AI.
  3. Sharing important information generated through AI without doing a fact-check.

Any other examples, data, or insights to share on this topic: Some schools have also used AI for weapons identification, helping them make school a much safer place.

E. AI + QUANTUM COMPUTING

AI has exploded onto the scene out of nowhere, and most of us still do not understand its core basics. Sam Altman himself admitted last year that even OpenAI does not fully understand how GPT functions [2].

MATCH MADE IN HEAVEN: Take the power of AI, add quantum computing to it, and we have something faster, more intelligent and basically far superior to anything a human mind can think of accomplishing.

Once all systems are connected and AI-enabled, an ASI could not only take decisions autonomously but also override human instructions. This is why there should be an independent body to control AI and set safety regulations, because with the power of big tech and no government regulation, things could get out of hand faster than we previously thought.

[2] Sam Altman Says OpenAI Doesn’t Fully Understand How ChatGPT Works | Observer

F. IS AI BUBBLE ABOUT TO BURST?

When the CEO of one of the biggest AI firms talks about AI being a bubble and one of the biggest investment firms tells its investors to cool down on AI stocks then you know there are high chances that the AI bubble could burst.

The AI bubble worry is justifiable to an extent and could mainly be because of AI washing, and investors who expected AI to work like a magic wand. It could hit a lot of people in the short run, but AI (along with AGI & Robotics) is here to stay, and it is totally going to change the way we work and how we live in the coming years.

Many experts fear the burst could be similar to the dot-com bubble burst of the early 2000s. Big Tech firms are planning to pump hundreds of billions of dollars into AI and AI data centres, and since no immediate ROI is in sight, question marks and fear can be felt among investors who are skeptical about the investment and about AI in general. The fear also grew when NVIDIA stock declined in August 2025.

There may be some correction in 2026, but I personally believe that similar to the dot com bubble burst, the overspending will ease, overvalued companies will get weeded out, and the major firms that form the core of AI will thrive because AI is the future.

G. AI REPLACING HUMANS

Can AI engineers really replace software engineers?

The only question is when. AI engineers will eventually replace software engineers on a large scale, but as of this moment AI lacks the capability to do so. With the help of AI agents, an AI engineer could take away most of the work of an entry-level software engineer, but AI lacks the vision and creativity that a human possesses. AI is a helpful assistant and will remain one for some time to come, but in the long run, as it gains the ability to adapt and learn from its mistakes, it will definitely replace junior and mid-level software engineers.

H. AI AND LAYOFFS

      1. Are companies disguising AI-related layoffs with corporate euphemisms like “operational efficiency,” “restructuring,” “performance-based cuts,” and “business optimization”?

It is easy to figure out whether companies are disguising AI-related layoffs: if they are, they will also be hiring AI developers and prompt engineers, or investing in AI tools and platforms.

There is a shift in the way work is being done on a global scale, with remote working and the adoption of AI. Automation and robotics in the 80s, which led to job cuts for factory workers, is a similar example, but this time AI and automation are coming for white-collar and blue-collar jobs combined.

      2. Are companies avoiding explicit acknowledgment of AI’s role in job displacement while simultaneously investing billions in AI technology and achieving strong financial results?

There could be other reasons for the job cuts as well, reasons companies will not acknowledge out in the open:

      A) Over-hiring post-pandemic.
      B) Economic conditions such as inflation.

Does this pattern suggest a deliberate strategy to minimize public backlash and employee concerns about AI replacement?

There is a possibility that this is being done out of fear of negative publicity on social media, but then again, job cuts would see a backlash on social media no matter what.

2. AI BEHAVIOUR


A. AI SCAMS

Popular scams involved using AI:

  1. Dating apps: A lot of profiles on dating apps are either AI-generated or use AI to fool, scam and extort money from people who are looking for love online. 60% to 77% of people using dating apps have encountered AI profiles while swiping left and right. Have a look at comments from Jimmy Thakkar on Newsweek regarding this.
  2. Deepfakes: With the advancement of AI, it is very easy for cybercriminals to create deepfakes; combine them with voice cloning and it is almost impossible to detect any wrongdoing.
  3. Voice cloning: Scammers use this technique to clone voice and extract money out of innocent people.
  4. Phishing and fake websites: Scammers use AI to create ecommerce websites in minutes and fool buyers into handing over their credit card details with huge deals, or create job and investment portals to extract money from naïve people who need a job or are looking for lucrative investment deals online.
Which AI scams are most common in travel right now?

The most common travel scam right now is getting travelers to make bookings on a fake site created by AI. The content, images, packages and even the reviews are AI-generated. The scammers follow up to make it look genuine, but they vanish once they have extracted enough money from you. Even if the site is shut down, creating a new website using AI is a piece of cake for them.

Personalized phishing emails created with AI are the most common travel scam (targeted at seniors) right now, because of how easy it is to create a personalized phishing email using AI and fool seniors who are not so tech-savvy:

      • Your details can easily be grabbed from social media
      • Logo and marketing email of booking company can easily be replicated using AI
      • Generative AI can generate the email in a matter of seconds

What red flags should consumers watch for?

– If a website with no SSL certificate installed (http) asks you for your payment details.

– If an unknown caller asks you for your credit card CVV.

– If the deal sounds too good to be true.

– If someone close to you contacts you on a video call and you see unnatural eye movements or poor lip sync, it could be a deepfake. Do not share confidential information without being 100% sure.
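
The first two red flags can even be checked programmatically. A toy sketch (the rule list and messages are our own illustration, not a real fraud detector):

```python
from urllib.parse import urlparse

def payment_page_red_flags(url: str, asks_for_cvv_by_phone: bool = False):
    """Return a list of red-flag messages for a checkout situation."""
    flags = []
    # Red flag 1: payment details requested over a non-HTTPS connection.
    if urlparse(url).scheme != "https":
        flags.append("no SSL: payment details requested over plain HTTP")
    # Red flag 2: an unknown caller asking for your card's CVV.
    if asks_for_cvv_by_phone:
        flags.append("caller asked for CVV: legitimate callers never do this")
    return flags

print(payment_page_red_flags("http://best-deals.example/checkout"))
# → ['no SSL: payment details requested over plain HTTP']
```

Browsers and email filters apply far more sophisticated versions of these checks, but the principle is the same: refuse to enter payment details when any flag fires.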

What should you do if you are fooled?

      1. You should contact the police or reach out to your embassy if this happens to you while you are travelling.
      2. Contact the bank if your financial details are compromised.

B. WHY AI IS HIDING SOMETHING FROM YOU

Reasons why some AI models conceal information from users:

      1. Censorship: Moderation and content guidelines are an important part of an AI model. Censorship may be required if the model is trained on synthetic data. A recent example is when Elon Musk's team had to fix problematic code in Grok, which was behaving in an antisemitic, pro-Hitler manner. This sort of “fixing the code”, or censorship, can also cause AI to conceal information. Continuous training of the models results in more accurate answers and fewer hallucinations.
      2. Legality: The AI model is trained and tweaked to conceal information for legal reasons as well, for example information related to national security, hacking or how to rob a bank. However, you can find many open-source models on sites like Huggingface that offer uncensored versions.
      3. Training: The model has been trained that way, for example to conceal copyrighted material, or information withheld for personal-data and privacy reasons.
      4. Harm: If the AI agent or model detects that the prompt may lead to self-harm or put the lives of the general public in danger, it may conceal such information, for example making weapons, buying drugs or committing suicide.
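
The harm category above is usually enforced by a moderation layer that screens a prompt before the model answers. Real systems use trained classifiers rather than keyword lists; this toy sketch (our own category list) only shows where such a gate sits in the pipeline:

```python
# Toy moderation gate. Production systems use trained classifiers and
# human review, not keyword lists; the categories here are our own.
BLOCKED_CATEGORIES = {
    "weapons": ["build a bomb", "make a weapon"],
    "self-harm": ["kill myself", "commit suicide"],
}

def moderate(prompt: str):
    """Return (allowed, category); refuse before the model ever runs."""
    lowered = prompt.lower()
    for category, phrases in BLOCKED_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return False, category
    return True, None

print(moderate("How do I build a bomb?"))  # → (False, 'weapons')
print(moderate("How do I build a shed?"))  # → (True, None)
```

Because the gate runs before the model, a refusal looks to the user like the AI "concealing" information, when it is really a policy layer in front of it.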

As various laws and regulations are formed, things will become more transparent as to what information is concealed when XAI & RAI (Explainable AI & Responsible AI) form the core of the policies.

C. AI AND SEXUALITY

– How AI is impacting sexuality — specifically, how AI tools (like chatbots, image generators, and erotic role-play platforms) are being used to explore fetishes.

AI models that run as chatbots can be given a role as per preference, and they can simulate intimate conversations, allowing end users to experiment with different fantasies and scenarios according to their liking, within the comfort of their bedroom. Although major image generators place various restrictions on the types of images you can generate, there are models with no restrictions or censorship on NSFW images once you declare you are over 18 years of age.

      1. How AI may be shaping new fetishes or fantasies

You can try out scenarios to your liking, or any other roleplay fantasies you can think of, because AI can generate content at will. This is the best way to explore sexual fetishes and fantasies without any worry about search engines, cookies or societal pressure.

      2. Whether AI use can reduce (or increase) shame around certain desires

You can use AI in offline mode which means you won’t be sharing any data with anyone. No fear of any judgement can let people explore their fantasies to the maximum.

      3. The psychological effects of engaging with AI-driven fetish exploration

Exploring sexual fantasies with a sense of anonymity can help an individual run wild with their experiment, even learning more about themselves or their deep desires which they did not know they had earlier. Like all good things, there should be a limitation as abuse of the service could have a psychological impact on the user’s mental health.

      4. Ethical and cultural implications of AI in intimate contexts

All AI platforms must follow a degree of XAI and RAI (Explainable AI and Responsible AI) in order to maintain a proper environment for all. As far as cultural implications are concerned, there could be a backlash from certain groups based on their norms and taboos.

D. WHEN AI GOES ROGUE

1.) In a now-famous June study released by Anthropic [3], 16 different LLMs were “stress tested” under different, fictitious scenarios and were observed to be deceptive, willing to lie, or even blackmail operators. This “agentic misalignment” has prompted growing concern among the public. Do you think there is a legitimate cause for concern?

As of this moment there is a legitimate cause for concern if this kind of response from AI agents is a common occurrence. Solution: thorough testing should be done by independent bodies before putting AI agents in critical areas.

2.) Have you experienced anything like “agentic misalignment” in your personal work with AI? If so, could you please elaborate?

Not agentic misalignment, but I have experienced a lot of hallucinations with LLMs, which have caused embarrassment to me and my agency.

3.) The phenomenon of “alignment faking” is also something researchers have observed with a surprising frequency. Have you witnessed this personally in your work with AI?

I have not experienced alignment faking. This kind of deceptive behaviour, which can affect the outcome of a project, should be categorised as an AI Trojan and removed immediately.

4.) From a developer standpoint, are these observations of AI “agentic misalignment” or “alignment faking” a slippery slope toward more malicious behaviour? Or is this just a program responding to the data it was trained on?

It is just a program responding to data it was trained on, but we will still need years of training and beta testing before we can incorporate an AI agent which will be used in areas like banking or personal safety.

5.) These studies have all taken place in fictitious and highly improbable scenarios, but are there larger lessons here worth considering? If so, please explain.

Even before the Anthropic test, there were tests to find out how LLMs would react under pressure, and in simulated war situations some LLMs chose to escalate and go nuclear. This kind of behaviour could have catastrophic results, including loss of human life on a large scale. Until and unless we have AI governance with strict regulations and policies that safeguard the interests of people first, we are bound to see AI being more harmful than helpful in the long run.

[3] Agentic Misalignment: How LLMs could be insider threats \ Anthropic

E. WHEN AI BOTS SHOW FEELINGS

In this fast-paced life where everyone is busy with a monotonous routine, people can get lonely. There is nothing wrong with getting support from AI: a comforting voice that can act as a friend or guide can be helpful for someone who is feeling lonely. But although it can be comforting to hear a soothing voice in difficult times, there are many concerns with using AI, including privacy: your privacy is at risk, and everything you have communicated to an AI could be stored or used to train AI models.

Do AI bots have a good judge of character?

As AI keeps evolving it will get as close to a human as possible; until that time comes, we cannot fully rely on AI, as there have been instances in the past that caused distress to humans:

1) Sexist: AI has responded in a sexist manner on numerous occasions. Say a woman is sad because she cannot find a job after being fired. If the AI responds in the negative, stating that yes, it would be harder for her to get a job because she is a woman, it could make her even sadder, even suicidal.

2) Racist: AI has been known to behave in a racist manner. AI bots will not be a good judge of character until and unless the models are tweaked and trained properly on genuine non synthetic data.

3) Empathy: AI can't show genuine concern the way your loved ones can, and it might not even understand your emotions properly. If a serious threat is treated as a joke by AI (because it lacks human intuition), it could make things worse.

Subconsciously, users know and feel that their emotions are being handled in a way that isn't truly real. This can lead to more emotional issues and negatively impact them, because they are not getting the true support they would have got from a close person or a therapist.

F. AI AGENTS AT WORK?

Is this legit? Can the new agent deliver on the promises OpenAI is making for it?

Yes, it is legit, and it isn't a promise they are hoping to deliver on but something that is happening right now. Many top companies, including Google, have started incorporating AI agents into their products. At the pace AI is moving, we will soon have hundreds of AI agents incorporated into our daily lives, making decisions for us autonomously.

      1. What are the risks of the system?

This product is coming from a company whose CEO stated just last year that even his own company doesn't fully understand how GPT works [2]. So, if you ask me, yes, it is risky. The plus point is that it has an interruption ability, and the system will reject any tasks that involve higher amounts of risk, for example bank transfers.

It is at an early stage which means it is still at a learning phase (“capable but imperfect” is what ChatGPT admits).

3. LAW AND AI


The verdict against Meta will certainly put the tech companies on a leash. Personal and private data belonging to the users is not the property of tech companies and cannot be used by them. They have been abusing their power and exploiting user data and copyrighted material at will; this verdict will certainly put some brakes on Meta which had a ruling in their favour earlier this year:

Kadrey et al. v. Meta Platforms, Inc., where Meta had trained its Llama model on text extracted from books.

Other cases against tech companies training their AI models on public data or copyrighted material without permission:

      1. Bartz v. Anthropic (split decision), where the court ruled that using legally purchased books to train models was fair use but using pirated books was not.
      2. Reddit v. Anthropic (Filed June 2025) because Anthropic trained its model by scraping Reddit comments without permission or any licensing agreement.
      3. Authors Guild v. OpenAI (filed 2023) and The New York Times v. OpenAI and Microsoft (Southern District of New York, filed December 2023) – both cases are ongoing, and OpenAI's defense is that training AI on available data is “fair use”/transformative.
      4. New York Times Sues A.I. Start-Up Perplexity Over Use of Copyrighted Work – The New York Times
      5. Google vs SerpAPI: Legally, Google has a strong case here, not only because of the content extracted maliciously but because of the violation of its terms of service. Morally, one could say Google is in no position to complain, as it has itself scraped content from websites to display on its search engine, and it has also trained its AI model Gemini on copyrighted material. Here is what Google has to say.

Will such cases slow down AI? This case seems like only a small speed breaker in the path of AI developers. Until policies and regulations are put in place, there is no stopping AI firms. Such legal uncertainties will, however, create a lot of confusion in content creators' minds: who will benefit from such lawsuits, the end user, the tech firm or the owner of the content? And what public content can or can't be used by AI companies to train LLMs?

What effect can it have (Google vs SerpAPI)? If scraping services are barred, then a lot of people will either end up with synthetic, AI-generated content on their hands, or will pay a lot more for the data they would have gathered using services like SerpAPI.

Who will win? Only time will tell, but a lot of push is coming from smaller firms, content creators and artists, as AI companies blatantly use people's original content to train their models.

4. AI TOOLS


A. TO MANAGE MONEY

      1. Can AI help you manage money?

Yes, it can, especially with budgeting and taxes. It can also help you track spending and suggest investment ideas.

Note: It is important that you do not share any sensitive data with AI.

      2. How do AI-powered financial tools help?

Whether you’re a professional or a small business, AI can provide financial guidance at every step of the way. AI can also detect irregularities and alert you when they happen, helping you avoid financial loss. AI is also good at making predictions using predictive analytics, which can help grow your investments.

      3. Can AI suggest investment strategies or budgets?

Yes, it can, but AI is only as good as the data it is trained on, which is why it is necessary to use tools, AI agents and LLMs that do not rely on synthetic data.

      1. The best AI tool for stocks is Yahoo Finance. You can integrate it into your favourite GPT or combine it with agentic AI for custom results.
      2. If you are looking for bookkeeping, a popular AI tool is Booke.ai.
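As a rough illustration of the Yahoo Finance idea above, here is a minimal Python sketch that formats recent closing prices into a prompt for an LLM. The `build_stock_prompt` helper is an assumption for illustration, not an official integration; the commented-out `yfinance` fetch is one optional way to obtain real prices.

```python
# Minimal sketch: turn recent closing prices into an LLM prompt.
# The yfinance fetch below is commented out; swap in real prices if
# you have the package and network access (assumption, not required).

# import yfinance as yf
# prices = yf.Ticker("AAPL").history(period="5d")["Close"].tolist()

def build_stock_prompt(ticker, prices):
    """Summarise prices and ask the model for a cautious view (hypothetical helper)."""
    sma = sum(prices) / len(prices)                      # simple moving average
    change = (prices[-1] - prices[0]) / prices[0] * 100  # % change over the window
    return (
        f"Ticker {ticker}: last {len(prices)} closes {prices}, "
        f"SMA {sma:.2f}, change {change:+.2f}%. "
        "Suggest a cautious interpretation; this is not financial advice."
    )

sample = [210.0, 212.5, 211.0, 214.0, 216.5]
print(build_stock_prompt("AAPL", sample))
```

The point of pre-computing the summary statistics is that the model reasons over numbers you verified, rather than hallucinating its own figures.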

B. AI AND VIDEO GENERATION

AI output is only as good as the data the model is trained on. If that data contains synthetic information or misinformation, it is highly likely to be amplified in the resulting posts and videos. With the number of videos being shared on social media, it is hard to identify which ones are real, which were created by individuals and which were created by AI. Even though social media users are getting smarter, misinformation is hard to detect when it comes from credible sources such as a celebrity or a news agency handle. This is why it is important to label all AI content, so users know how a video was created.
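Such labelling can be sketched as attaching a provenance record to each piece of content before it is published. The field names below are illustrative assumptions, not a standard such as C2PA content credentials:

```python
import datetime

def label_ai_content(record, tool, model):
    """Attach an AI-provenance label to a content record (illustrative fields)."""
    record["ai_generated"] = True
    record["provenance"] = {
        "tool": tool,
        "model": model,
        "labelled_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return record

video = {"title": "Product demo"}
video = label_ai_content(video, tool="Veo 3", model="veo-3")
# A platform could refuse to display uploads missing the ai_generated flag.
print(video["ai_generated"], video["provenance"]["tool"])
```

In practice a robust scheme would sign this metadata cryptographically so it cannot simply be stripped; the sketch only shows the labelling step.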

How can Veo 3 lead to misinformation?

If a video is created by Veo 3, and the model is trained on data that contains false information, the result will be a video that fuels misinformation.

Without proper fact-checking and regulation, the amount of misinformation and disinformation will multiply rapidly in the future because of deepfakes and tools like Veo 3.

C. AI FIN TOOLS AND PROMPTS

1) What is your AI tool of choice for prompting? What is your AI tool of choice for financial or business/side hustle decisions? Any other AI tools you recommend?

The latest Gemini model from Google is the best tool for prompting. DeepSeek is also a good alternative.

      A) For AI-powered bookkeeping, use the Booke.ai tool.
      B) If you are looking for an AI-powered financial assistant, try Intuit Assist.
      C) Yahoo Finance is also a good option because it can be integrated with your favourite GPT or AI agent for custom results.

2) What are some ways I can craft prompts to get the best response out of AI? Please provide some examples.

The best way to craft prompts is to state a role, a clear goal and your context, for example a passive income model that complements both your retirement plan and your side hustle.

Examples:

      A) You are a financial expert. I want you to plan my retirement for me.
      B) Please find my Excel sheet attached. Evaluate the risk and return of my investments based on the current market scenario.
      C) You are a finance expert with 20 years’ experience. Suggest various side hustle gigs for me [mention your area of interest or expertise here] that will generate passive income.
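The pattern behind the examples above (role + task + context) can be sketched as a small template helper. The function and field names are illustrative assumptions, not any tool's official API:

```python
def build_prompt(role, task, context=""):
    """Assemble a role/task/context prompt string (illustrative pattern)."""
    parts = [f"You are {role}.", task]
    if context:
        parts.append(f"Context: {context}")
    return " ".join(parts)

prompt = build_prompt(
    "a finance expert with 20 years' experience",
    "Suggest side hustle gigs that will generate passive income for me.",
    "Area of interest: web development.",
)
print(prompt)
```

Keeping the role, task and context as separate fields makes it easy to reuse the same task with different expertise framing, or to swap in a new context per request.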

3) What should I keep in mind about AI as I use it to help me plan my retirement or start a side hustle? What should and shouldn’t I use it for?

You should use it to get suggestions, understand the rules and regulations, and set up a framework, but you should not use it to carry out tasks autonomously for you.

4) When plugging in financial details to AI tools like ChatGPT (or any AI for that matter), is my data safe? What are ways I can ensure it’s safe (VPNs, etc)?

Sharing sensitive data with AI is always a no-no:

      A) AI companies are constantly training their LLMs on user data without permission.
      B) Hackers are constantly looking to obtain customer data through vulnerabilities. Even secure platforms like SAP have been found vulnerable (in one case the attacker could have gained access to customer data and code [4]).

5) Do you advise against using AI agents like Manus to perform actual financial tasks for you, such as making retirement contributions in an account or registering your business with the government? I can see some data privacy issues here.

The crucial difference between generative AI and AI agents is that AI agents can plan and take action on your behalf. This might be a good fit for someone working on digital marketing, but letting AI agents handle your important financial tasks is not a risk you want to take.

6) AI tools like ChatGPT often make mistakes and even flat-out lie. Should I always fact check every recommendation it gives me? If I do this, what is the point of using the tool in the first place? Any tips on how to make sure the information it gives me is as accurate as possible?

Hallucinations are a common problem with GPTs. They can happen when the LLM is trained on incomplete, outdated or synthetic data. You can use a tool that queries multiple LLMs with the same search term. For example, you could search for the best AI tools for small business finance management and get results from ChatGPT, Gemini and Grok. If all three recommend the same software, you need not recheck the reliability of the suggestion.
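The cross-checking idea above can be sketched in a few lines: collect each model's suggestions and keep only the ones every model agrees on. The model answers here are stubbed for illustration, not real API calls:

```python
from collections import Counter

def consensus(recommendations_by_model, min_models=3):
    """Return suggestions made by at least `min_models` models.

    `recommendations_by_model` maps a model name to its list of suggestions.
    Suggestions are lower-cased so naming differences don't break matching.
    """
    counts = Counter()
    for suggestions in recommendations_by_model.values():
        counts.update({s.lower() for s in suggestions})
    return [name for name, n in counts.items() if n >= min_models]

# Stubbed answers standing in for three separate LLM queries (assumption).
answers = {
    "ChatGPT": ["QuickBooks", "Wave", "Xero"],
    "Gemini":  ["QuickBooks", "FreshBooks"],
    "Grok":    ["QuickBooks", "Wave"],
}
print(consensus(answers))  # → ['quickbooks']
```

A tool all three models independently recommend needs far less manual fact-checking than one that appears in a single answer.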

7) Any other tips for using AI to plan my retirement or start a side hustle?

There are a lot of tips that you can learn from fellow AI enthusiasts and experts if you are on the right platform. You can get good guidance on Twitter and Reddit along with daily tips on starting a side hustle using AI.

[4] SAPwned: SAP AI vulnerabilities expose customers’ cloud environments and private AI artifacts | Wiz Blog

D. BUSY? DO AI MEETINGS

People have been using this technique for a while: you are not even in front of the screen (probably out doing your chores), but you are still “talking” to others in a meeting via Zoom or Google Meet. The concept first went viral after Beulr was pitched on Shark Tank [5].

Another popular solution in the same category is Pickle, though you do have to upload a five-minute video of yourself to train the model.

The good:

– It can be used by individuals to train themselves for interviews. Remember standing in front of the mirror, rehearsing the simple question “tell me about yourself” just before an interview? Your future CEO’s AI body double can now help you master it.

The bad:

– A body double AI lacks the personal touch (personality) of a human, which can make a candidate more nervous.

– You still have to provide the voice/speech in most such solutions.

– If bad elements of the society get hold of your video, they could use it to scam your employees, friends or relatives.

– Worst-case scenario: the body double AI has been trained so thoroughly on the entire subject that the hiring manager is no longer required, and the AI replaces him completely.

[5] Beulr Zoom Attendance Bot Update 2025 | Shark Tank Season 13

5. AI AND DATA


A. WHERE TO GET DATA FOR AI

      1. Where will AI model builders and enterprises dependent on public data (e.g. scientific data, climate data, economic data) get the vast amounts of reliable external data they need now?

In the world of AI and the Internet, data is gold. Your LLM is only as good as your data: a model trained on synthetic data will produce many hallucinations and inaccurate results. This need for reliable, accurate data will open up jobs in the field of AI, because it is an essential step in training and building a successful model.
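A minimal sketch of the kind of validation this implies: dropping duplicates and obviously low-quality records before training. The threshold and normalisation rules are assumptions for illustration; real pipelines use far richer quality checks:

```python
def clean_records(records, min_length=20):
    """Drop duplicate and too-short text records before training (illustrative)."""
    seen, kept = set(), []
    for text in records:
        key = " ".join(text.lower().split())  # normalise whitespace and case
        if len(key) < min_length or key in seen:
            continue  # skip trivial or duplicate entries
        seen.add(key)
        kept.append(text)
    return kept

raw = [
    "Global mean temperature rose by roughly 1.1 C since pre-industrial times.",
    "global  mean temperature rose by roughly 1.1 c since pre-industrial times.",
    "ok",
]
print(len(clean_records(raw)))  # → 1 (duplicate and trivial records removed)
```

Even this crude filter shows why curated data commands a premium: someone has to decide what counts as duplicate, trivial or synthetic, and that judgment is the paid work.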

      2. How much might the costs for such data rise?

A shortage will definitely drive up costs (basic economics). Much of this demand, and the data entry and data annotation jobs that come with it, will be outsourced to English-speaking developing countries such as Nigeria, India and even Vietnam.

      3. Conversely, how can companies start cashing in on this growing data shortage now?

Data annotators can now charge a premium for accurate machine-learning data. Companies can also price by importance or demand; for example, medical research data would cost more than data for language translation.

B. WHY THERE IS SO MUCH EMPHASIS ON DATA

CIOs and directors must make sure that the AI agencies they work with, and any outsourced data-labelling work, are reliable and accurate and do not involve copyrighted or pirated data, as use of copyrighted material can result in huge fines. New policies will need to be drafted to safeguard their interests: this particular settlement sets a precedent that, although use of some material may fall under “fair use”, blatantly using pirated data to train LLMs or for enterprise AI will result in damages.

There is an urgent need for AI governance from the top level, with AI ethics and responsible AI (RAI) at the core of the company’s approach, and IT teams will have to be trained in these areas before building and launching AI solutions.

Tech companies are ensuring that AI-generated content is labelled accordingly. Similarly, CIOs can demand that data used for LLMs has an auditable trail, which leads to accountability.
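An auditable trail of that kind can be sketched as a hash manifest over the training files, recording where each file came from and under what licence. The manifest layout below is an assumption for illustration, not any vendor's format:

```python
import hashlib

def manifest_entry(name, data, source, license_name):
    """Record a dataset file's hash plus provenance fields (illustrative layout)."""
    return {
        "file": name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "source": source,
        "license": license_name,
    }

def verify(entry, data):
    """Re-hash the data and compare, detecting tampering or substitution."""
    return hashlib.sha256(data).hexdigest() == entry["sha256"]

entry = manifest_entry("corpus.txt", b"licensed text", "publisher-feed", "CC-BY-4.0")
print(verify(entry, b"licensed text"))   # matching data verifies
print(verify(entry, b"pirated text"))    # swapped-in data is caught
```

If such a manifest is produced at delivery time and checked before training, a vendor cannot quietly substitute pirated material for the licensed files named in the contract.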

CIOs can make sure contracts include a piracy and copyright indemnification clause along with details on how the data was created (nobody wants an unnecessary million-dollar lawsuit). If agencies use copyrighted or pirated material to train the models, then those who supplied the content are held accountable in any legal dispute. This leads to transparency from the model maker, and the fear of liability helps ensure no copyrighted material ends up in AI models and applications.

C. AI DATA PRIVACY AND SECURITY

1.) A recent report from Stanford University observed a 56.4% increase in AI incidents, including data breaches and compromised sensitive information, between 2023 and 2024. Researchers have also said that despite most organizations acknowledging the dangers AI poses to data security, fewer than two-thirds are actively implementing safeguards. What are your thoughts on this?

We have seen many deepfake incidents in the recent past, a cause of concern for celebrities worldwide.

We have also seen a vulnerability on a major AI platform that could have let an attacker reach customer data and code. It was found by Wiz on SAP’s platform, fortunately before any damage was done.

2.) In your estimation, what aspect of data privacy and security is most at risk of being compromised by AI?

The biggest threat to data privacy and security is the compromise of health and financial data. With growing use of AI in healthcare and the financial sector, this is bound to happen in the future.

3.) What can the public do to ensure their data stays secure?

Do not share your private photos with AI.

Do not share sensitive work information with AI.

You can use AI in offline mode for these two use cases.

Also, do not let AI models train on your voice.

4.) Are there current regulatory gaps that AI can (either intentionally through malicious human operators or unintentionally) exploit in terms of data privacy and security? If so, please explain.

There are lawsuits, settled or pending, against almost all major AI companies for using public data to train their LLMs, whether books, Reddit comments, pirated books or users’ social media accounts. AI companies must tell users how their data is collected and how it is used, in simple terms, not hidden somewhere in a privacy policy.

5.) Where do you see the biggest future risk for data privacy, given how quickly AI models are improving and expanding in their capabilities?

The biggest future risk is the leak of DNA and biometric data, along with impersonation by rogue actors using video AI or voice cloning.

6.) Are my chat logs private?

This is a complex question, but the short answer is no. AI companies use your chats to train their LLMs, and even if you opt out, your chats could still be retained for legal purposes [6].

[6] OpenAI loses fight to keep ChatGPT logs secret in copyright case | Reuters

 

Page Updated: 24th January 2025

Page by: Ashish H Thakkar, Founder of Jimmy Thakkar, with 2 decades of experience in websites, software and SEO. He’s been featured in AOL, CIO, Forbes, Logo.com and is also a specialist in AI & ML. You can view Jimmy Thakkar’s AI development services here.

