Insight from James Garner, Project Flux
OpenAI’s latest memory upgrade transforms AI assistants from disposable tools into permanent knowledge repositories. The governance implications for project teams are significant.
ChatGPT received a major upgrade this month. Plus and Pro users can now ask about conversations from a year ago and receive direct links to the original chats.
What was once a disposable question-and-answer interface has become a searchable, persistent knowledge system.
For individual users asking about recipes or workout routines, this is a convenience upgrade. For project professionals discussing risk registers, commercial positions and stakeholder strategies, the implications are substantially more complex.
What the Memory Upgrade Actually Does
Sam Altman, OpenAI CEO, highlighted this ChatGPT memory upgrade on X (formerly Twitter):
“We have greatly improved memory in ChatGPT – it can now reference all your past conversations! This is a surprisingly great feature imo, and it points at something we are excited about: AI systems that get to know you over your life and become extremely useful and personalised.”
Previously, asking ChatGPT about conversations from a year ago produced vague suggestions to check browser history. Now, the system can find exactly what was discussed and when.
Memory has become comprehensive rather than fragmentary.
The practical impact is significant. As one technology reviewer noted, you can ask ChatGPT about a recipe you requested a year ago, and there is a good chance it will find it, complete with a link back to the exact conversation.
Chats no longer feel disposable.
This capability builds on memory improvements OpenAI has been rolling out since 2024. The system now works in two ways: saved memories that users explicitly request and chat history insights that ChatGPT gathers automatically from past conversations to improve future ones.
The combination creates a comprehensive knowledge base about each user.
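For readers who think in code, the Python sketch below is a toy model of that two-tier split: explicitly saved memories alongside automatically gathered insights. Every name in it is invented, and it says nothing about how OpenAI actually implements the feature; it simply makes the governance point concrete, because both tiers persist independently of any single chat.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantMemory:
    """Toy two-tier memory store: items the user explicitly asks the
    assistant to save, plus insights gathered automatically from past
    chats. A mental model only, not OpenAI's implementation."""
    saved: list[str] = field(default_factory=list)     # user: "remember this"
    insights: list[str] = field(default_factory=list)  # inferred from history

    def remember(self, note: str) -> None:
        """Explicit, user-requested memory."""
        self.saved.append(note)

    def infer(self, observation: str) -> None:
        """Automatically gathered insight from a conversation."""
        self.insights.append(observation)

    def context(self) -> list[str]:
        # Both tiers feed future responses. Neither belongs to a single
        # chat, which is why deleting a chat does not erase its memories.
        return self.saved + self.insights

memory = AssistantMemory()
memory.remember("Prefers NEC4 contracts")
memory.infer("Frequently discusses rail programme cost estimates")
print(memory.context())
```

The comment in `context` is the detail that matters for governance: the memory outlives the conversations that produced it.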
The Governance Gap
This capability creates substantial challenges for organisations that have not established clear AI usage policies.
Consider what project professionals routinely discuss with AI assistants: design decisions, risk discussions, stakeholder concerns, commercial sensitivities, programme assumptions, cost estimates.
Information that previously vanished into the ether now persists. A conversation about a difficult client twelve months ago can be retrieved. Budget assumptions shared casually with an AI assistant remain accessible. The informal nature of chat interfaces has not changed, but the permanence of the content has transformed entirely.
The security implications are serious. According to recent research, sensitive data now makes up approximately 35% of employee ChatGPT inputs, up sharply from 11% in 2023. The types of data being shared have expanded to include proprietary business information, client details and strategic documents.
That trend should be a wake-up call for project governance.
Key Concerns:
- Data Retention Risks: Commercially sensitive information stored indefinitely in ChatGPT’s memory creates permanent exposure risks.
- GDPR Compliance Clash: Persistent AI memory conflicts with data minimisation principles and right-to-erasure requirements under GDPR.
- Client Confidentiality: Contractual confidentiality obligations may be breached when project details become retrievable across sessions.
- Legal Discovery Impact: AI conversation logs become discoverable records, complicating privilege claims and e-discovery processes.
OpenAI’s privacy documentation notes that content shared with ChatGPT may be used to improve their models unless users opt out.
Memory evolves with interactions and is not linked to specific conversations. Deleting a chat does not erase its memories. These technical details have significant implications for professional practice.
The Construction Industry Problem
Construction has a tendency to adopt consumer AI tools without proper governance frameworks. Professionals use ChatGPT because it is convenient, not because their organisation has approved it.
The boundary between personal productivity tool and professional knowledge repository blurs without anyone making a conscious decision.
Security experts are increasingly concerned about this pattern. As one comprehensive security guide notes, doctors, lawyers, financial advisors, and others in high-risk professions should never input client or patient data into public AI tools.
For healthcare, this violates HIPAA with potential fines and licence risk. For legal professionals, it breaches attorney-client privilege. The same principles apply to construction professionals handling commercially sensitive project information.
The memory upgrade amplifies this problem. A project manager discussing multiple schemes across a year creates a substantial knowledge base about their work, their clients and their organisation’s approaches.
That information now lives permanently in a system outside organisational control.
OpenAI allows users to turn off memory features and delete specific memories. But how many professionals actively manage these settings?
The default condition is comprehensive memory, not selective retention. Most users will never change their settings, which means most users are building persistent AI knowledge bases without realising it.
The Opportunity and the Risk
The convenience of AI assistants with year-long memory is genuine. Imagine an AI that remembers every design decision, risk discussion and stakeholder concern across a multi-year programme. The productivity potential is significant.
For complex, long-duration projects, this capability could transform how project managers use AI assistance. Instead of re-explaining context in every conversation, project professionals could build on accumulated understanding.
Questions could reference previous discussions. AI recommendations could account for decisions made months earlier.
But capturing that value requires governance that most organisations lack. Project professionals need training not just on AI capabilities but on professional boundaries and data protection obligations.
Organisations must establish clear protocols about what information can be shared with AI assistants, how memory features should be managed, and when data must be purged.
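To make that concrete, here is a deliberately naive sketch, in Python, of how an organisation might pre-screen draft prompts against its own policy categories before anything leaves the building. The category names and patterns are hypothetical placeholders; a real deployment would rely on proper data loss prevention tooling rather than keyword matching.

```python
import re

# Hypothetical policy categories and patterns an organisation might
# define; these are illustrative placeholders, not a real classifier.
SENSITIVE_PATTERNS = {
    "commercial terms": re.compile(r"\b(tender|bid|margin|rate card)\b", re.I),
    "cost data": re.compile(r"[£$€]\s?\d[\d,]*(\.\d+)?"),
    "client identity": re.compile(r"\bclient\b.{0,40}\b(name|identity)\b", re.I),
}

def screen_prompt(draft: str) -> list[str]:
    """Return the policy categories a draft prompt appears to touch."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(draft)]

draft = "Our tender is priced at £4,200,000 with a 3% margin."
flags = screen_prompt(draft)
if flags:
    print("Review before sending. Flagged:", ", ".join(flags))
else:
    print("No flagged categories found.")
```

The design choice worth copying is not the pattern matching but the placement: the check runs before the text reaches the assistant, which is the only point at which a policy can still prevent, rather than merely document, an exposure.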
What Organisations Must Do Now
This is not about refusing to use AI tools. That ship has sailed. This is about using them in ways that do not create liability exposure or compromise professional obligations.
Immediate steps for project organisations:
- Audit current AI usage. Understand which tools professionals are using and what information they are sharing. Shadow AI creates unmanaged risk.
- Establish clear policies. Define what categories of information can and cannot be discussed with AI assistants. Commercial terms, client identities and strategic positions may require protection.
- Train professionals. Ensure project teams understand memory features, privacy settings and professional obligations. Convenience should not override confidentiality.
- Review contracts. Check whether existing agreements address AI tool usage, data sharing and confidentiality in the context of persistent AI memory.
- Consider enterprise alternatives. Business and Enterprise ChatGPT plans handle data differently; OpenAI states that data from these plans is not used to train its models by default. Organisations with significant AI usage should evaluate whether consumer tools are appropriate.
The Bottom Line
ChatGPT’s extended memory is a step change in AI utility for project professionals. The ability to maintain year-long conversational context could transform how project managers use AI on complex, long-duration projects.
The question is whether organisations develop the governance frameworks to capture this value safely or continue the current pattern of informal adoption with unclear boundaries.
The memory upgrade has raised the stakes.
The industry’s response will determine whether AI assistants become genuine professional tools or liability time bombs.
The construction sector’s tendency to adopt consumer technology without proper governance creates particular vulnerability. Project professionals discussing commercially sensitive matters through AI interfaces that now remember everything should pause and consider the implications.
AI governance for project professionals is not optional. It is essential. Subscribe to Project Flux for practical guidance on navigating technology transformation in the built environment.
All content reflects Project Flux’s personal views and is not intended as professional advice or to represent any organisation.