Agenda

8:00 – 8:50

Registration and breakfast

8:50 – 9:00

Chair’s opening remarks

9:00 - 9:45

REGULATION – PANEL DISCUSSION
Reviewing the AI regulatory landscape and how this may evolve

  • Navigating multiple regulatory bodies across jurisdictions
  • Correctly stating new technology use to regulators
    • Reviewing issues of false marketing with AI
  • Appropriately calibrating AI internally given the regulatory environment
  • Reviewing the CFTC's piece on AI
  • Understanding how much data can be shown without being penalized
  • Relying on SR 11-7 to regulate AI models
    • Proving reliability and soundness of a model
    • Ensuring unbiased outputs
  • Reviewing the impact of the EU AI Act when selling into the EU

David Palmer,
Senior Supervisory Financial Analyst,
Federal Reserve Board

Sabeena Ahmed Liconte,
Head of Legal & Chief Compliance Officer,
ICBC Standard Bank Group

9:45 – 10:20

AI POLICY
Reviewing definitions around AI and developing policy and a standardized framework around them

  • Categorizing AI based on policy
    • Reviewing which teams own this
  • Identifying gen AI correctly to apply the right rules and regulations
    • Looping in legal and compliance teams
  • Standardizing a risk management framework
  • Understanding human responsibilities with new gen AI models
    • Putting adequate policies and procedures in place
  • Setting risk appetite at the enterprise level
  • Setting a comprehensive risk management framework for data review and validation

10:20 – 10:50

Morning refreshment break and networking

Risk & Compliance 

10:50-11:25

MODEL RISK
Using AI to enhance model risk management and validation

  • Validating new models that come in with LLMs
  • Generating risk models through AI
  • Integrating gen AI into traditional model risk
  • Reviewing considerations of integrating model risk frameworks
  • Identifying new approaches to model risk culture
  • Standardizing best practices to understand risks for models
  • Taking advantage of latest technology without increasing model risk
    • Understanding risks if this goes wrong
  • Putting controls in place to avoid proliferation of AI models
  • Putting gen AI through a model approval process

Chandrakant Maheshwari,
Lead Model Validator,
New York Community Bank

Scaling AI

Day 1 Scaling AI Moderator: Senior Executive, ValidMind

10:50-11:25

GENERATIVE AI
Leveraging gen AI and reviewing use cases across teams

  • Managing expectations of gen AI
  • Understanding the benefits and risks brought by gen AI
  • Achieving a high standard of low false positives
  • Reducing humans in the loop
  • Revising current model risk framework to cover gen AI
  • Reviewing where bots can be implemented across products
  • Showcasing how detailed gen AI models can be
  • Implementing AI to help reduce work pressure
  • Articulating and identifying a use case

11:25-12:00

AI MODELS
Is AI defined as a tool or a model?

  • Defining AI when it is not viewed as a model framework
  • Reviewing how AI is defined as a tool and not a model
  • Controlling the internal spread of AI models without increasing model risk
    • Establishing an approval process
  • Innovating internally whilst staying compliant

Shawn Tumanov,
Data, Model and AI Governance Executive,
GEICO

11:25-12:00

VALIDATING GENERATIVE AI
Including conceptual soundness and outcome analysis

  • Session details to come

Agus Sudjianto,
Former EVP, Head of Corporate Model Risk,
Wells Fargo

12:00-12:35

EXPLAINABILITY & INTERPRETABILITY
Tracing inputs to outputs to justify outcomes from AI models

  • Increased complexity of distinguishing outputs derived from inputs
    • Need to understand how models work for regulatory reasons
  • Understanding how the models work to reduce biased answers
  • Having transparency with AI to understand certain outputs
  • Applying principles of explainability to generative AI
  • Leveraging big data sets and understanding outcomes
  • Keeping human in the loop where relevant to explain outcomes
  • Validating generative AI content
  • Identifying when gen AI has hallucinated results

12:00-12:35

AI INVESTMENT
Where is best to allocate AI investment?

  • Diversifying spend with different AI models
  • Reviewing the long-term value of implementing AI
  • Balancing how much to invest into AI and where
  • Optimizing allocation of investment across teams with decentralization
    • Not just picking the most attractive investment
  • Picking the right use case to invest in when there is no visibility across all products
    • Uncertainty of potential impact that could be caused

Abhinav Prasad,
Head of Data Science & AI Strategy Planning, Securities Services & Digital,
BNY Mellon

12:35-1:45

Lunch break and networking roundtables
During lunch, meet our partners, presenters, peers, or join a roundtable to discuss topics with like-minded professionals. Check the roundtables organized by CeFPro below:

Build vs. buy: which is more effective?

Sabeena Ahmed Liconte,
Head of Legal & Chief Compliance Officer,
ICBC Standard Bank Group

Defining where AI sits within the enterprise

Chris Smigielski,
Director of Model Risk Management,
Arvest Bank

Should financial institutions fight AI fraud with AI?

Sudharshan Narva,
Director, Data Analytics Internal Audit,
TIAA

Build vs. buy: which is more effective?

Charles Shen,
Managing Director, Head of Model Risk Management,
Societe Generale

1:45-2:30

DEVELOPING AND TRAINING AI – PANEL DISCUSSION
Developing and training AI in relevant areas across the business to avoid mistakes

  • Developing and designing AI internally
  • Eliminating time challenges around summarizing documents
  • Training AI correctly to get full benefits and avoid incorrect results
  • Understanding the best areas to leverage AI within the institution
  • Ensuring the AI tool is trained well enough to give reliable results
  • Having relevant expertise to review and challenge documents produced

Satya Vandrangi,
Director of Product Management,
JP Morgan (TBC)

Ankur Goel,
SVP – Head of Consumer and Fraud Modeling,
PNC

Petr Chovanec,
Director,
UBS

Sachin Malhotra,
Managing Director, Senior Portfolio Manager – Chief Investment Office,
Bank of America

1:45-2:30

TRANSFORMATION – PANEL DISCUSSION
Transforming and adapting AI internally whilst balancing the risk and reward

  • Adopting new technology and staying ahead of potential risks
  • Balancing risk and reward for the adoption of AI
  • Accelerating AI development internally
  • Managing data efficiently
  • Navigating approvals from multiple teams to implement

Chris Smigielski,
Director of Model Risk Management,
Arvest Bank

Deniz Tudor,
Head of Modeling,
Bread Financial

Ted Pine,
Sr Business Development Manager, Insure AI,
Munich Re

2:30-3:05

AML
Leveraging AI to protect against bad actors amid increased levels of threat

  • Leveraging AI to protect against bad actors
  • Matching the pace of bad actors' growth to combat fraud attacks whilst staying compliant
    • Gaining relevant expertise to combat them
  • Protecting institutions whilst delivering solutions to customers
  • Training employees to identify phishing emails generated with AI
  • Putting in additional controls to protect data & employees
  • Reviewing how current AML models are utilizing AI
    • Understanding benefits, challenges and next steps
  • Identifying false positive ratios and steps moving forward
  • Decreasing checks when potential threats are identified

2:30-3:05

THIRD AND FOURTH PARTY
Having controls in place when 3rd party AI solutions are integrated

  • Understanding new technology being brought in
    • Assessing changes and data being used in the models to govern them
  • Gaining control over changes to models made from vendors with less visibility
  • Mitigating potential risks when integrating vendor models internally
  • Comparing 3rd party models vs. internally developed models
  • Governing and validating third party developed use cases

3:05-3:40

BAD ACTORS – USE CASE
Identifying attacks as they become more sophisticated with AI

  • Tightening controls internally
  • Identifying identity theft accounts at opening or customer onboarding
  • Identifying deepfakes being used to make fake applications
  • Reviewing AI-created checks from fraudsters
  • Authenticating AI-created content for fraudulent activity
  • Evaluating if gen AI can manage fraud portfolio allocation
  • Detecting fraud patterns in payment transaction activity

Sudharshan Narva,
Director, Data Analytics Internal Audit,
TIAA

3:05-3:40

MARKETING – USE CASE
Using AI to personalize marketing outreach to customers

  • Setting guardrails to ensure ethical use
  • Exploring where this could be used efficiently within the institution
  • Increasing personalization in marketing through the use of AI
  • Defining red lines when onboarding different solutions
  • Leveraging AI for content
  • Going to market with new underwriting tools and online capabilities

Jonas Ng,
Chief Operating Officer,
Laurel Road

3:40 - 4:10

Afternoon refreshment break and networking

4:10 - 4:55

CULTURE/TALENT – PANEL DISCUSSION
AI Literacy: Investing in knowledge and skills for an AI workforce

  • Automating tasks but needing review from employees
  • Reviewing future implications to talent as AI gets adopted on a wide scale
  • Preserving knowledge as AI is adopted more
  • Defining new job content with gen AI
  • Reviewing new skill sets the industry will require
  • Changing the existing workforce to adopt new technology
  • Focusing on upskilling employees to optimally use AI
  • Finding appropriate skillsets and practical experience when hiring new talent
  • Changing training programs to make employees aware of AI risks
  • Partnering with AI vendors to train and upskill employees on AI and LLMs
  • Retaining talent when AI knowledge is in demand

Ramesh Sethi,
VP, Digital Solutions Delivery,
Prudential Financial

Charles Shen,
Managing Director, Head of Model Risk Management,
Societe Generale

4:55 - 5:30

DATA
Inputting and sourcing relevant data into AI models to ensure accurate outputs

  • Reviewing development and evolution of AI in this area
  • Implementing good data into models to get accurate answers out
  • Giving chatbots access to data that doesn’t contain customer data
  • Setting clear controls and definitions on how access to data works
  • Getting data across disciplines to feed into AI
  • Putting data into the system in a structured way
  • Integrating AI/ML to source data at a lower cost
  • Pulling relevant data more efficiently with AI/ML across the enterprise
  • Ensuring data is meeting internal and external privacy regulations
  • Having comprehensive and accurate unstructured data for gen AI models

Shravan Bharathulwar,
Director,
RBC

5:30- 5:40

Chair’s closing remarks

5:40

End of day one and networking drinks reception

8:00 – 8:50

Registration and breakfast

8:50 – 9:00

Chair’s opening remarks

9:00 - 9:35

THE BOARD
Building a business case to get AI proposal approved

  • Defining criteria for writing a compelling business case
  • Demonstrating why AI is the strongest solution
  • Showing the ROI
  • Identifying the benefits of a use case
  • Identifying where to improve within the organization to gain benefits of AI
  • Gaining board buy in to allow internal adoption
  • Setting risk appetite around AI at the board level

Yury Blyakhman,
Chief Data Officer, Managing Director,
JP Morgan

9:35 - 10:20

IMPLEMENTATION – PANEL DISCUSSION
Implementing new technology into the existing enterprise

  • Implementing AI whilst staying compliant with regulations
  • Protecting customer data when using open AI
  • Implementing AI without causing customer impact
  • Articulating the business case and showing AI to be more effective than current tools
  • Having an accurate inventory to help with implementation
  • Having educated talent to lead an implementation program
  • Applying new tools to existing legacy processes
  • Having correct talent in place to explore new tools
  • Validating information through prompt engineering
  • Cleaning up data to comply with new systems

Jennifer Courant,
Chief Data Officer,
DWS Group

Robbi E Armstrong,
Director AI Products & Strategy,
KeyBank

Steve Dunn,
Head of Innovation & AI Incubation,
SMBC

10:20 – 10:50

Morning refreshment break and networking

Risk & Compliance

10:50 - 11:35

OPERATIONALIZING – PANEL DISCUSSION
Testing and deploying AI within the institution

  • Operationalizing with different stakeholders and regulations involved
  • Gathering all relevant information before implementing
  • Creating fundamental education about AI
  • Setting governance boundaries around AI
    • Ensuring internal information is not shared externally
  • Exploring benefits with AI without crossing boundaries internally
    • Finding what management are comfortable with
  • Converting the promise of AI into actions and dollars saved
  • Measuring, monitoring and engaging with AI as it is deployed
  • Ensuring investments are yielding positive outcomes

Venkat Vedam,
Head of Data Science Engineering,
Manulife Investment Management

Jaydip Mukhopadhyay,
Vice President, Global Head – Model Risk, AI/Gen AI, Internal Audit Group,
American Express

Ariye Shater,
Managing Director,
Barclays

Scaling AI

10:50 - 11:35

ORGANIZATIONAL – PANEL DISCUSSION
Determining strategic AI deployment

  • Identifying a team that will run AI strategy
  • Designing the organization to incorporate current AI features and planning ahead
  • Reviewing if a centralized or decentralized team should be put in place
  • Mitigating risks that arise when running a decentralized team
  • Distinguishing where strategy is driven from the bottom up
  • Showing AI strategy to senior management
  • Explaining to the organization how AI can help
  • Collaborating between lines of defense to better share information
  • Shifting job descriptions with more automation internally

Robbi E Armstrong,
AI Products & Strategy Director,
KeyBank

Jonas Ng,
Chief Operating Officer,
Laurel Road

Scott Kinross,
Senior Vice President, Software Engineering Director, AI/ML Automation,
PNC

11:35 - 12:10

DATA SECURITY
Managing data privacy and security issues with generative AI

  • Gaining optimal insights from generative AI models without compromising data
    • Developing an operating model to achieve this
    • Delaying adoption of generative AI
  • Assessing availability of suitable and accurate data
  • Knowing where data is at all times
  • Limiting risk exposure

Andrew Hoffman,
Vice President & Senior Counsel, Privacy, Data Protection & Cybersecurity,
Goldman Sachs

11:35 - 12:10

GOVERNANCE
Performing appropriate governance over the AI lifecycle

  • Reviewing governance provided after onboarding an AI/LLM provider
  • Understanding how models are tested for appropriate governance
  • Setting up an AI center of excellence
    • Taking a deeper dive into the risks, benefits and costs of the project
  • Developing a standard model governance process across teams
  • Developing controls around who develops applications
  • Automating and streamlining risk identification
  • Preparing for requests from regulators
  • Limiting use of gen AI internally to avoid risk exposure
  • Having methodologies ready even without formal requirements in place

Stefano Pasquali,
Managing Director, Head of Investment AI Modeling and Research,
BlackRock

12:10 – 1:20

Lunch break and networking
During lunch, meet our partners, presenters, peers, or join a roundtable to discuss topics with like-minded professionals. Check the roundtables organized by CeFPro below:

Upskilling Vs hiring new talent with the right skill sets, what works best?

Chandrakant Maheshwari,
FVP, Lead Model Validator,
Flagstar Bank

Showing the value gain from AI integrated internally

Audie Wang,
Executive Director, Head of Quantitative Analytics and Data Science, Americas Finance,
UBS

Defining where best to implement AI internally

Ted Pine,
Sr Business Development Manager, Insure AI,
Munich Re

Training AI to extract relevant data components out of commercial credit documents

Deniz Tudor,
Head of Modeling,
Bread Financial

1:20 - 1:55

AI RISKS
Appropriately identifying risks around AI and setting risk appetite

  • Understanding the overall risk assessment of an AI system
  • Appropriately governing AI through protection and risk analysis
  • Identifying unique characteristics of AI as models evolve
  • Establishing a risk management model around AI
    • Safeguarding what staff can and cannot use
  • Creating enterprise governance policy around the use of AI
    • Understanding when tools like ChatGPT can be used
  • Reviewing impact to financial decision making from inconsistent answers with AI
  • Having a risk assessment in place to manage AI

Benjamin Dynkin,
Executive Director, Cybersecurity & Responsible AI Leader,
Wells Fargo

1:20 - 1:55

USAGE OF AI AND VALUE GAIN
Showing the value gain from AI integrated internally

  • Deriving value brought from AI
  • Finding and defining the first use case
  • Putting up guardrails around the risks brought by AI
  • Defining best practices of AI policy
  • Defining criteria for success of using AI

Audie Wang,
Executive Director, Head of Quantitative Analytics and Data Science, Americas Finance,
UBS

1:55 - 2:30

RISK ASSESSMENT
Creating an AI application for automated risk assessment in AI initiatives

  • Demonstrating the real-time creation of an AI application for risk assessment
  • Exploring the benefits of automated risk assessment in reducing review times and increasing reliability
  • Customizing assessments to a company’s policies and procedures
  • Ensuring compliance and legal adherence with AI-driven assessments
  • Accelerating model risk management with automatic feedback for requestors

Pablo Curello,
Global Head of AI & Innovation Solution Design and Delivery,
BNY Mellon

1:55 - 2:30

LARGE LANGUAGE MODELS
Adapting traditional model risk frameworks to align with large language models

  • Deploying ChatGPT in a corporate environment
    • Testing Microsoft Copilot
  • Augmenting capabilities of bankers with large language models
  • Leveraging large language models to summarize earnings calls
    • Gaining a holistic view of the industry
  • Maturing risk management framework around gen AI
    • Validating and backtesting models
  • Customizing products for clients

2:30 – 3:00

Afternoon refreshment break and networking

3:00 - 3:45

ETHICAL & RESPONSIBLE AI – PANEL DISCUSSION
Implementing guardrails to ensure the responsible and ethical use of AI

  • Reviewing the ethical considerations around AI
  • Monitoring models that are prone to hallucination
  • Defining metrics used in continuous monitoring
  • Assigning inherent risk rating and residual risk rating to gen AI
  • Reviewing methodologies used to ensure responsible use of AI
  • Training and testing models to remove bias and ensure ethical use
  • Avoiding reputational risk and direct impact if institutions are not responsible
  • Reviewing increased risk as AI can be prompted in unusual ways

Richa Singh,
Vice President, Data and AI – Investment Banking,
Goldman Sachs

Ruchi Sharma,
Executive Director,
UBS

Dhagash Mehta,
Head of Applied AI Research,
BlackRock

Roger Parsley,
MD, Global Head of Technology and Cybersecurity Risk Governance,
State Street

3:45 - 4:30

CONTROLS
Setting appropriate controls around AI and performing due diligence

  • Ensuring safety of using AI and avoiding human error
  • Training AI correctly
    • Putting in the right data sets
  • Ensuring personal data is not exposed
  • Updating contractual clauses to control AI and protect from risk
  • Having appropriate policies and procedures in place for AI tools
  • Conducting due diligence when onboarding an AI model
  • Getting relevant information when onboarding third party AI tools

4:30 – 4:40

Chair’s closing remarks

4:40

End of congress