Banker's Digest
2026.01
Breaking past proof-of-concept fatigue for AI investments

What proportion of annual capital expenditure and operating budgets should banks commit to AI and data infrastructure over the long term? For a reference point, JPMorgan Chase invests approximately USD 2 billion annually in AI, accounting for just over 10% of its annual technology budget and about 1% of its annual revenue. Taiwanese banks must decide how much funding to reserve for AI and data infrastructure within their capital expenditure structure, and for how long – a choice that will determine whether they remain merely followers of the current AI wave or position themselves to shape the industry going forward.

Recently, the financial sector has started complaining of “PoC fatigue.” International consulting firms have repeatedly observed banks starting with sky-high expectations for generative AI, yet relatively few proof-of-concept (PoC) projects progress from validation to full deployment, leaving substantial resources trapped in the experimental phase. Senior executives privately acknowledge that while numerous PoCs exist, only a limited number deliver clearly measurable financial returns. PoCs are repeated cycle after cycle, presentations become increasingly polished, and media exposure grows, yet financial results show little improvement. At this stage, what banks need is not another showcase project, but an investment logic that pulls AI back from experimentation into core business decision-making.

In practice, many bank PoCs remain confined to laboratory settings and presentation environments. Connections to core systems are often partial or temporary, process controls and second-line oversight are simplified, and performance indicators focus mainly on technical metrics or subjective feedback. While this approach is suitable for concept validation, it provides little support for subsequent capital allocation decisions. To move AI beyond experimentation, projects must be designed from the outset as systems intended for production, rather than as demonstrations to be evaluated later. Even when deployed on a limited scale, they should be assessed against production standards. Performance indicators must shift to hard metrics such as reductions in case processing time, lower manual review ratios, changes in alert hit rates and false positives, and fewer operational risk incidents, rather than stopping at positive user feedback or marginal improvements in model accuracy.

At its core, this approach treats each AI initiative as a genuine investment project. Clear conditions for scaling up or termination should be defined at approval. If data quality, risk controls, or benefits fall short of expectations, losses should be contained early and resources reallocated to more promising targets; if the outcome meets expectations, the project can be expanded within a controlled risk framework.
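One way to make "defined at approval" concrete is to encode the gate itself, so that every pilot is reviewed against the same pre-agreed thresholds. The sketch below is a minimal illustration of such a review; the metric names, thresholds, and decision rules are assumptions chosen for the example, not figures from any particular bank or regulator.

```python
# Illustrative sketch: a pre-agreed gate review for an AI pilot.
# All metric names and thresholds are hypothetical examples.

PILOT_GATES = {
    "case_processing_time_reduction": 0.20,   # at least 20% faster end to end
    "manual_review_ratio_reduction":  0.15,   # at least 15% fewer cases routed to manual review
    "false_positive_rate_change":    -0.10,   # false positives must fall by at least 10%
    "operational_incidents":          0,      # no new incidents attributable to the model
}

def gate_decision(observed: dict) -> str:
    """Compare observed pilot results against thresholds agreed at approval."""
    misses = []
    for metric, threshold in PILOT_GATES.items():
        value = observed.get(metric)
        if value is None:
            misses.append(f"{metric}: not measured")
        elif metric == "false_positive_rate_change":
            # The change must be at least as negative as the agreed threshold.
            if value > threshold:
                misses.append(f"{metric}: {value:+.2f} vs required {threshold:+.2f}")
        elif metric == "operational_incidents":
            if value > threshold:
                misses.append(f"{metric}: {value} vs allowed {threshold}")
        elif value < threshold:
            misses.append(f"{metric}: {value:.2f} vs required {threshold:.2f}")
    if not misses:
        return "scale up within the agreed risk framework"
    # Example rule: missing most gates triggers termination, otherwise remediate.
    if len(misses) > len(PILOT_GATES) // 2:
        return "terminate and reallocate budget: " + "; ".join(misses)
    return "hold and remediate: " + "; ".join(misses)

# Example: a pilot that improves speed but not alert quality.
print(gate_decision({
    "case_processing_time_reduction": 0.28,
    "manual_review_ratio_reduction":  0.18,
    "false_positive_rate_change":    -0.04,
    "operational_incidents":          0,
}))
```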
This design difference goes beyond technical methodology and leads directly to critical questions: which items should be counted as costs, and what levels of return should be expected? If discussions focus only on model licensing fees or cloud computing expenses, the true costs are almost certain to be underestimated. The total cost of ownership of AI for banks extends well beyond individual models, to include the technology and governance frameworks that can withstand regulatory scrutiny over the long term. Broadly speaking, AI costs in banking can be divided into at least four layers.

The first is computing and core platforms: the computing resources, storage, networks, and redundancy required to train and run AI. Whether built in-house or consumed through the cloud, these are ongoing structural expenditures that are difficult to scale back. The second layer is data and integration: data inventory, cleansing, field standardization, access controls, and integration with core and peripheral systems often become the largest budget sinks. The third layer is security and risk management, encompassing cybersecurity protection, masking of personal and sensitive data, vulnerability scanning and remediation, incident reporting and response, third-party risk management, and stress testing and resilience exercises for external-facing services. The fourth and most frequently overlooked layer is governance and compliance, including model validation, bias and fairness testing, continuous monitoring, version management, comprehensive documentation and audit trails, model change reviews, and the time and manpower required to coordinate with internal audit, internal control, and the multiple lines of defense.

Looking further, the true differentiator lies not in how impressive a single AI project appears, but in whether a bank is willing to invest in a comprehensive platform that connects fragmented PoCs into a shared, synergistic capability base. Such a platform should include a data foundation centered on a data hub, standardized MLOps and model risk governance mechanisms, and the control interfaces and agent orchestration layers needed to support generative AI. This infrastructure should be regarded as a shared capital investment across projects, rather than being charged in full to the ROI of the first use case. The cost may appear high from the perspective of a single project, but once established, the platform can be reused by dozens of subsequent initiatives and its cost amortized over multiple years.

Talent and organizational costs are equally critical. Bringing models into production requires not only expertise in data engineering, data science, cybersecurity, and model risk, but also an AI platform team that spans IT, risk management, and business units. This team is responsible for the data platform and model governance, providing shared services across business lines. If AI is treated as a series of isolated, one-off projects, organizational knowledge must be rebuilt whenever key personnel leave or vendors change, making it difficult to accumulate know-how and causing the same costs to be incurred again and again. With a stable AI platform and governance framework, talent can accumulate experience within a consistent architecture, the impact of turnover becomes more manageable, and past investments are more likely to crystallize into long-term institutional assets.

The most common question when discussing AI returns is how much labor it can save. While valid, relying solely on labor savings to evaluate AI investment is narrow and can lead to poor decisions. For most banks, the reality is not immediate staff reductions but avoided additional hiring; the key issue is whether the released time and capacity can be redeployed toward higher-value business development and risk management. Evaluating returns therefore requires a broader perspective across three dimensions: efficiency, risk, and revenue. On the first, intelligent customer service, internal copilots, and process automation can directly reduce inbound volumes, shorten average handling times, and lower rework rates. These improvements ultimately reduce per-transaction costs and overall operating expenses. Through activity-based costing, which breaks each process into discrete steps and estimates the processing and error costs of each step, the benefits of AI can be clearly linked to management accounting and the financial statements.
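As a stylized illustration of that costing exercise, the sketch below prices one back-office process step by step and compares it with an assumed AI-assisted version. The steps, times, error rates, volumes, and cost figures are invented for the example and would have to come from a bank's own activity analysis.

```python
# Illustrative activity-based costing sketch for one back-office process.
# Step times, error rates, volumes, and cost figures are hypothetical.

HOURLY_COST = 30.0  # fully loaded staff cost per hour (assumed)

# (step name, minutes per case, error rate, minutes to rework an error)
BASELINE_STEPS = [
    ("intake and data capture",  8.0, 0.06, 15.0),
    ("document review",         12.0, 0.04, 20.0),
    ("decision and write-up",   10.0, 0.02, 25.0),
]

# Assumed effect of an AI copilot: faster capture and review, fewer errors.
AI_ASSISTED_STEPS = [
    ("intake and data capture",  3.0, 0.020, 15.0),
    ("document review",          7.0, 0.020, 20.0),
    ("decision and write-up",    9.0, 0.015, 25.0),
]

def cost_per_case(steps) -> float:
    """Expected staff cost per case: processing time plus expected rework time."""
    minutes = sum(t + error_rate * rework for _, t, error_rate, rework in steps)
    return minutes / 60.0 * HOURLY_COST

baseline = cost_per_case(BASELINE_STEPS)
assisted = cost_per_case(AI_ASSISTED_STEPS)
monthly_volume = 40_000  # cases per month (assumed)

print(f"cost per case: {baseline:.2f} -> {assisted:.2f}")
print(f"monthly saving at {monthly_volume} cases: {(baseline - assisted) * monthly_volume:,.0f}")
```

Because every figure sits at the step level, the same structure can be reconciled with management accounting: each step maps to a cost center, and the delta per case rolls up directly into operating expense lines.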
The second dimension is risk management and capital efficiency. If AI helps banks distinguish more precisely between lower- and higher-risk customers and cases, and detect repayment anomalies or suspicious transactions earlier, they can extend more high-quality credit without increasing overall risk, or reduce non-performing loans and credit losses while maintaining lending volumes. These improvements are reflected in asset quality, loan profitability, and overall returns, enabling each unit of capital to support a larger volume of healthy business.

The third dimension is revenue growth and customer experience: the most difficult to attribute precisely, yet the one that offers the greatest potential for structural differentiation. By integrating transaction, behavioral, and interaction data, AI can help relationship managers in corporate banking better understand client cash flows and supply chain relationships, and assist retail and wealth management teams in aligning offerings with customers’ life stages and risk preferences, improving cross-selling and product penetration. In digital channels, customer segmentation and real-time content adjustment can increase conversion rates for advertising and marketing campaigns. To truly incorporate revenue and experience into the AI investment logic, banks must establish experimental design and performance tracking capabilities, translating metrics such as incremental sales, reduced attrition, and revenue driven by digital interactions into concrete KPIs linked to long-term customer value and share of wallet.

In practice, banks can further apply a nine-cell matrix based on complexity and benefit as a common evaluation framework for all PoCs. Regardless of size or technology maturity, each proposed PoC should first be positioned within this matrix, providing a basis for prioritization when multiple projects compete for limited budgets and resources. Projects with low-to-medium complexity and high expected returns naturally receive higher priority, as they are more likely to deliver tangible results at reasonable cost. Projects with high complexity and high returns are typically positioned as strategic investments, requiring phased implementation and alignment with platform development. Proposals that fall into the high-complexity, low-return category should be rigorously scrutinized during investment review; unless their design can be adjusted to clearly improve benefits or reduce complexity, they should not proceed lightly. The ability to assess priorities and budgets with a strategic perspective will be one of the most critical considerations for bank decision makers in the AI era.
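The matrix itself is simple enough to encode as a shared screening rule, so that every proposal is rated the same way before budget discussions begin. The sketch below shows one possible mapping from complexity and expected benefit to a priority bucket; the ratings, bucket labels, and example proposals are illustrative assumptions rather than a standard classification.

```python
# Illustrative nine-cell screening of PoC proposals by complexity and expected benefit.
# Ratings, bucket labels, and example proposals are hypothetical.

LEVELS = ("low", "medium", "high")

def screen(complexity: str, benefit: str) -> str:
    """Map a (complexity, benefit) pair onto one of the nine cells and a priority bucket."""
    if complexity not in LEVELS or benefit not in LEVELS:
        raise ValueError("ratings must be low, medium, or high")
    if benefit == "high" and complexity in ("low", "medium"):
        return "priority: fund now, likely to deliver tangible results at reasonable cost"
    if benefit == "high" and complexity == "high":
        return "strategic investment: phase the build and align with platform development"
    if benefit == "low" and complexity == "high":
        return "challenge at review: redesign to cut complexity or lift benefit, or stop"
    if benefit == "low":
        return "low priority: defer unless nearly free to deliver"
    return "middle of the matrix: compare against competing proposals case by case"

# Example proposals (hypothetical) rated as (complexity, benefit).
proposals = {
    "KYC document pre-screening":  ("medium", "high"),
    "full credit-decision agent":  ("high",   "high"),
    "meeting-notes summarizer":    ("high",   "low"),
}
for name, (complexity, benefit) in proposals.items():
    print(f"{name}: {screen(complexity, benefit)}")
```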
The author is an Assistant Research Fellow at the Institute of Financial Research of TABF.


