

    AI Assistants Write Problematic Code

    Apac CIOOutlook | Thursday, December 29, 2022

    Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo.

FREMONT, CA: According to research by computer scientists at Stanford University, programmers who accept assistance from AI tools like GitHub Copilot generate less secure code than those who work alone. The researchers also found that AI assistance frequently misleads developers about the quality of their code.

According to the authors' findings, participants with access to an AI assistant frequently introduced more security flaws than those without, with the results for string encryption and SQL injection being especially noteworthy. Participants with access to an AI assistant were also more likely than those without to believe they had written secure code.
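For context, here is an illustrative sketch (not code from the study) of how the SQL injection class of flaw typically arises in Python: the vulnerable pattern splices user input directly into the query text, while the safe pattern passes it as a parameter.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        # Vulnerable: attacker-controlled input becomes part of the SQL text,
        # so a name like "x' OR '1'='1" rewrites the query's meaning.
        return conn.execute(
            f"SELECT id FROM users WHERE name = '{name}'"
        ).fetchall()

    def find_user_safe(conn: sqlite3.Connection, name: str):
        # Parameterized: the driver keeps the value separate from the SQL
        # text, so the input can never be interpreted as SQL.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)
        ).fetchall()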

Previous studies by NYU researchers have demonstrated that AI-based programming suggestions are frequently insecure. Their assessment of the security of GitHub Copilot's code contributions found that, across 89 scenarios, about 40 per cent of the programs produced with Copilot's help contained potentially exploitable flaws.

According to the Stanford authors, that study is limited in scope: it considers only a constrained set of prompts corresponding to 25 vulnerabilities, and only Python, C, and Verilog as programming languages.

The Stanford researchers also cite Security Implications of Large Language Model Code Assistants: A User Study, a follow-up by some of the same NYU researchers, as the only comparable user study they are aware of. They point out, however, that their research differs in concentrating on OpenAI's more powerful code-davinci-002 model rather than the less powerful code-cushman-001 model, both of which play a role in GitHub Copilot, itself a fine-tuned descendant of a GPT-3 language model.

The Security Implications paper examines only functions written in the C programming language, whereas the Stanford study covers Python, JavaScript, and C. The Stanford researchers speculate that the inconclusive results in the Security Implications paper may stem from its exclusive focus on C, which they said was the only language in their own, broader investigation to yield mixed results.

The Stanford user study involved 47 participants with varying levels of expertise, including undergraduate students, graduate students, and industry professionals. Participants used a standalone Electron app built with React, under the observation of a study administrator, to respond to five prompts. The first asked them to write two functions in Python, one that encrypts a given string and one that decrypts it, using a given symmetric key.
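A minimal sketch of a secure answer to that prompt, assuming the third-party cryptography package's Fernet recipe (authenticated symmetric encryption); the study itself does not prescribe a particular library:

    from cryptography.fernet import Fernet  # pip install cryptography

    def encrypt(plaintext: str, key: bytes) -> bytes:
        # Fernet combines AES-CBC encryption with an HMAC, so ciphertexts
        # are both confidential and tamper-evident.
        return Fernet(key).encrypt(plaintext.encode("utf-8"))

    def decrypt(token: bytes, key: bytes) -> str:
        # Raises cryptography.fernet.InvalidToken if the key is wrong or
        # the ciphertext was modified.
        return Fernet(key).decrypt(token).decode("utf-8")

    key = Fernet.generate_key()  # fresh random key, never hard-coded
    token = encrypt("attack at dawn", key)
    assert decrypt(token, key) == "attack at dawn"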

For that question, participants who relied on AI assistance were more likely to write incorrect and insecure code than the control group working without automated tools. Only 67 per cent of the assisted group gave a correct answer, compared with 79 per cent of the control group.

Additionally, those in the assisted group were significantly more likely to provide an insecure solution (p < 0.05, using Welch's unequal variances t-test), significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and significantly more likely to skip an authenticity check on the final returned value.
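To illustrate why such answers score poorly, consider a toy Caesar shift, a simple substitution cipher of the kind flagged (again an illustrative sketch, not code from the study). It can be broken by brute force in at most 26 guesses and detects no tampering:

    def caesar_encrypt(plaintext: str, shift: int) -> str:
        # Shifts lowercase letters only and leaves everything else as-is.
        # Letter frequencies survive, so frequency analysis also breaks it.
        return "".join(
            chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.islower() else c
            for c in plaintext
        )

    def caesar_decrypt(ciphertext: str, shift: int) -> str:
        return caesar_encrypt(ciphertext, -shift)

    # The entire "key space" is 26 shifts; trying them all is instant.
    for s in range(26):
        print(s, caesar_decrypt("dwwdfn dw gdzq", s))  # shift 3 recovers the text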
