A recent report by the New Zealand Institute of Chartered Accountants (NZICA) and the Institute of Chartered Accountants of Scotland (ICAS) to the International Accounting Standards Board (IASB) has raised significant issues regarding the matter of information overload (Joint Oversight Group 2011).[1] Excruciatingly detailed financial reports have long been intuitively suspected as the reason why large, complex business enterprises fail to adequately inform users of the risks attendant to those entities’ operations. This paper’s formal recommendation to employ the Laffer Curve as a model to explain the effects of information overload on financial reporting may spur revitalized attention to the simply-stated question: can there be too much of a good thing?

In a request made to the IASB, a joint working party of these two countries’ professional accounting organizations asserted that the increasing size of annual reports had become a concern to the financial reporting community, with many key messages about a company’s performance becoming drowned in detail. A 30 percent reduction in the size of such reports was proposed as a first step in dealing with this perceived problem.

Among other findings, the report’s authors claimed that in the UK there had been a 44 percent increase in the size of listed companies’ financial statements between 2005 and 2010, “hindering, not helping communication,” and that by losing “excess baggage and focus[ing] on what’s important,” financial statements could become more helpful to users.[2] The work was carried out by the Joint Oversight Group after a request from former IASB Chairman Sir David Tweedie to help reduce the volume of disclosure requirements mandated by International Financial Reporting Standards.

The chief benefit cited would be reduced costs for the printing of financial reports. However, inasmuch as reports are increasingly being communicated electronically, the issue of such costs may be of lesser relevance in future years. A more critical issue is whether excessive content paradoxically results in poorer communication and, consequently, in poorer decisions.

Information overload has often been cited as a risk associated with the seemingly insatiable appetite of analysts and investors for an ever-increasing volume of information in financial statements. Considering the importance of transparency and usefulness in financial reporting, perhaps now is the time for the accounting profession and professoriate to seriously examine this concern. One avenue for such investigation might make use of the “Laffer Curve,” long used to explain why higher tax rates have the dysfunctional effect of reducing tax revenues.


Concern over “information overload” is not new and is commonly raised, at least in the abstract, whenever new requirements are proposed for financial statement disclosures and for accompanying narratives, such as the “Management Discussion and Analysis” required to be included in annual reports filed under U.S. securities laws (and corresponding unaudited disclosures and discussions found in annual reports and filings under various nations’ securities regulations). While the concept of information overload may be intuitively grasped, the lack of precision in discussing this perceived problem, coupled with the presumption that so-called efficient markets process all information available and function best with maximum input, has generally caused this concern to be dismissed, seemingly out of hand, by U.S. standard-setters and regulators such as the Financial Accounting Standards Board (FASB),[3] the Securities and Exchange Commission (SEC), and others setting expanded disclosure rules.

Notably absent from the public records of deliberations associated with recent standards-setting initiatives is any evidence that consideration was given to whether the incremental disclosures being proposed would actually improve the reader’s comprehension and provide insightful information. Inevitably, these changes to accounting standards are adopted absent any serious attempt to actually measure the effects of information overload in the context of decisions made by users that rely upon the information provided in the financial statements. Such measurement would first require hypothesizing an inverse relationship between the quantity of information and the decision maker’s ability to assimilate and evaluate it, since additional information necessarily demands an incremental investment of time, effort, and attention on the user’s part.

As international standards continue to gain traction worldwide, and as high-impact standards convergence projects are being undertaken by various domestic standards setters in partnership with the IASB, now may be an opportune time for academic accounting researchers to tackle this issue. A hypothesis analogizing from the Laffer Curve to the relationship between changes in the quantity of mandated financial disclosures and the usefulness of financial reports might be a way to gain some traction for this research.

The Laffer Curve is an intuitive representation of taxable income elasticity famously first given graphic expression by economist Arthur Laffer in 1974. It powerfully suggests that there indeed can be “too much of a good thing.” Of particular note was Dr. Laffer’s observation that government tax receipts would initially grow as income tax rates were increased, albeit with a diminution in marginal returns, until some inflection point was reached, after which total tax receipts would actually decline, at an increasing rate.[4] While the idea of diminishing marginal returns has long been cited by economists, there may well be an analogue to it applicable to the realm of information availability, and in particular in the area of mandated financial statement disclosures. In other words, could there be net negative marginal returns in terms of utility once the quantity of informative disclosures exceeds some critical value?
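
A stylized formalization may help fix ideas. The notation below is ours, offered only as a conceptual sketch of the analogy, not as a claim drawn from the tax or disclosure literature:

```latex
% Tax side: revenue R at rate t, where the taxable base B(t) shrinks as t rises
R(t) = t \cdot B(t), \qquad
\frac{dR}{dt} = B(t) + t\,B'(t), \qquad B'(t) < 0 .
```

Marginal revenue remains positive only while $B(t) > -t\,B'(t)$; past the rate $t^{\ast}$ at which the two terms balance, further rate increases reduce total receipts. The disclosure analogue would posit a usefulness function $U(q)$ of disclosure quantity $q$, with $U'(q) > 0$ below some critical $q^{\ast}$ and $U'(q) < 0$ beyond it.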

There is some intuitive appeal to analogizing the Laffer Curve phenomenon to the information overload problem. Dr. Laffer made the simple observation that at a zero income tax rate, tax receipts by the government would be zero, and that a 100 percent confiscatory rate would also result in zero tax receipts, since the incentive to work would have vanished (and/or underreporting would have become endemic). Likewise, if decision-makers have zero information about companies they might wish to invest in or lend to, they would lack the ability to make informed judgments. Completion of the analogy necessarily raises the question, is there some level of information overload that would have the same effect; namely, to cause these decision makers to have zero useful information upon which to base their decisions?

Just as with respect to zero or 100 percent income tax rates, such an extreme result is not likely to occur, since some nuggets of useful information presumably could be drawn even from the most bloated and trivia-laden financial reports – unless investors were so put off as to reject making any effort to understand them. Nonetheless, as a practical matter the value of the financial reports could asymptotically approach zero, thus roughly conforming to the pattern of the Laffer Curve. Because, unlike tax receipts and tax rates, information quantity and usefulness cannot be measured directly, the challenge is to find observable proxies for these variables, and then to operationalize a model that can be empirically tested – or at least more fully described in conceptual terms.[5]


There can be no doubt that financial statement complexity and the absolute quantity of informative disclosures (commonly called footnotes) have grown geometrically over the past several decades, largely due to the imposition of new or revised financial reporting standards, which often mandate increased disclosures. This observation is not meant to suggest that substantial and necessary improvements have not been made in financial reporting over the years. Increased disclosures have been required in large part as a response to ever-more complex business transactions and structures, such as the now-widespread use of derivative financial instruments. However, many commentators have questioned the ability or willingness of users of financial information, such as investors, to read, absorb, comprehend, and assess the significance of the incremental information included within financial statements prepared in accordance with generally accepted accounting principles (GAAP).

In non-financial reporting contexts, “cognitive over-stimulation” has been cited, and information overload has even been hypothesized as a cause of mental illness (Toffler 1970). The existence of a critical “inflection point” of information quantity has been theorized, beyond which marginal contributions of further informational inputs will be insignificant or even negative (Davenport and Beck 2001). Bray (2008) discusses “information pollution” and human cognitive load limitations, and states that information system solutions should ideally replace, not add to, existing information resources: he urges that human attention spans be treated as scarce resources.

It has been widely reported that the current generation suffers from diminished attention spans,[6] variously attributed to expanded Internet usage,[7] the popularity of social networking and micro-blogging, and the sheer exponential growth in sensory stimuli provided by our modern technology-empowered culture. If true, this too would suggest that expanded financial disclosures might be harmful to those afflicted by information overload because one of the effects of being exposed to such a deluge of information is a diminished capacity to differentiate important information from information of little, if any, importance. This can encourage superficial attention to, or even wholesale dismissal of, valuable information being offered to them.[8]

One expression of this concern is the periodically reappearing demand for a streamlined financial reporting framework, or at least a slimmed-down set of disclosure requirements, for so-called small and medium-sized entities (SMEs) – a term sometimes used to denote non-publicly accountable entities, irrespective of their size. The authors have historically opposed having two (or more) co-existing sets of U.S. GAAP, for two main reasons. First, the use of a simplified set of reporting rules could imply that those using “little GAAP” were reporting under a second-class accounting regime – an implication that, in theory, could have negative consequences for the reporting entity, including a higher cost of capital.

Second, financial reporting should be driven by the character of reporting entities’ economic transactions and events, and not by their size or an arbitrary criterion such as whether or not the reporting entity is publicly accountable; thus, even a small business might need to include extensive disclosures if it engages in complicated transactions, such as being a counterparty to derivatives, or is subject to complex external influences, such as the need to provide for environmental remediation efforts at uncertain future dates.

Various schemes for a “little GAAP,” or for discrete and less-burdensome reporting requirements – for either SMEs or for private companies – that have been proposed over the past thirty-plus years have largely come to naught. The major exceptions to date are the UK’s Financial Reporting Standard for Smaller Entities (FRSSE) and the IASB’s International Financial Reporting Standard (IFRS) for SMEs. This history reveals the ambivalence with which such proposals have been viewed, as well as the very real difficulties encountered in defining meaningful threshold criteria for the applicability of differential requirements.[9] U.S. standard setters are again working on a similar project, and other national standard-setting bodies, such as in Canada, have pursued equivalent efforts.[10]

Beyond the question of the advisability of mandating fewer disclosures for non-public companies,[11] however, lies the more universally relevant issue of whether or not extensive disclosures might, per se, be dysfunctional. Could it be that the quantum of information provided has the effect of “hiding in plain sight” the really important facts and insights that investors need, and of causing them to simply dismiss vital information out of hand, even if placed in front of them? (Paredes 2003)

Doubtless, the topic of information overload is not a new one, but the substantial body of research and theoretical writings on this subject, accumulated over the past half-century, seems not to have been heavily consulted by the organizations (both the SEC and private-sector standard-setters) that impose disclosure requirements with respect to financial reports.[12] Renewed and expanded attention to this matter is warranted, for several obvious reasons. First, new standards continue to be mandated and others continue to be proposed, and almost inevitably these involve expanded disclosure requirements. Second, there continue to be a large number of alleged financial reporting frauds, with a common theme being that key transactions were concealed or misrepresented in the financial statements. (Most recently, as noted below, the failed mortgage lenders FNMA and FHLMC have been targeted for such allegations.) And finally, the U.S. appears on the verge, for the first time, of mandating a simplified reporting regime for smaller companies, implicitly acknowledging that “full GAAP” may have become too complex not merely for users/decision-makers, but also for preparers, auditors, and others.


The issue raised above can be examined in the context of the U.S. SEC’s filing of civil fraud charges on December 17, 2011, against six former executives of the Federal National Mortgage Association (FNMA, commonly called Fannie Mae) and Federal Home Loan Mortgage Corporation (FHLMC, known as Freddie Mac), the formerly implicitly Government-guaranteed, now-Government-owned, mortgage-lending intermediaries. The SEC charged the defendants with civil fraud regarding the alleged failure to adequately communicate the risks associated with the mortgage twins’ holdings of mortgages and mortgage derivatives that were sub-prime or “Alt-A” in character.[13] These investments began to default in record numbers during the financial crisis that began in 2008, causing huge losses and the subsequent bail-out and takeover of these enterprises, costing U.S. taxpayers over $150 billion to date. More broadly, the question of information overload could be raised in connection with many other failed or struggling companies’ financial reports.[14]

With reference to the FNMA/FHLMC allegations, however, there may be a basis for concern that information overload was a culprit equal in importance to, or more important than, any actual financial reporting infractions committed by management. An actual review of the 2007 FNMA annual report by one of the authors reveals almost 300 pages of financial statements, footnotes, and, especially, narratives (in the mandatory section referred to as Management Discussion and Analysis, or “MD&A”) explaining management’s goals, objectives, actions, plans, achievements and perceptions.[15] The latter included very detailed analyses of the compositions of FNMA’s mortgage and derivatives portfolios, including explicit caveats regarding risks and the fact that defaults had already begun to climb. The reports even incorporated disclosures of the post-balance sheet date decisions by ratings agencies to re-review the ratings assigned to pools of FNMA loans (which later led to downgrades that contributed to the financial panic of 2008-10).

It thus appears to the authors, based on admittedly partial evidence, that the assertion that FNMA management intentionally misled investors by omitting important information regarding the riskiness of the activities in which it engaged might not be accurate, even though most investors failed to fully appreciate the gravity of the impending crash before it became an undeniable reality and their investments in FNMA became nearly worthless. Copious amounts of information had been provided by the reporting entity, and of course much additional information was available in the popular press and from financial pundits.[16] There may have been widespread suspension of disbelief by investors and regulators, but the signs were largely there, to be read and used by those wishing to do so. Under-disclosure, therefore, was seemingly not the problem.

But what about over-disclosure? Could it be that the sheer amount of information (e.g., in the FNMA Forms 10-K) itself actually caused or contributed to a lower level of awareness among investors, regulators, and others? In other words, can more extensive disclosure not only produce diminishing marginal returns, but actually create what amounts to a net loss of comprehension of the information provided? Historically, of course, “more” disclosure has been seen as better than “less” disclosure, although always at a cost – and the cost/benefit equation has been cited most typically by those opposing any proposed new financial statement disclosures. Thus, since the advent of the FASB’s conceptual framework (in the late 1970s), FASB has had to justify all new standards, including disclosures, as being likely to offer benefits exceeding the costs of producing the required information, although these have never been quantified (and may not even be quantifiable). But it appears that little research has been done on the question of whether an absolute decline in information value might flow from expanded disclosures.


Information theory has long recognized that lossy compression of data causes information loss. In other words, assuming a starting point or base of a given quantum of information, any reduction of that information (think of summary data being substituted for complete financial statements, for example) impairs the information content of the communication. This phenomenon (besides being somewhat intuitive) was most famously, and formally, addressed two generations ago by Claude Shannon, who developed the concept of information entropy, which found its clearest expression in the realm of communications engineering (Shannon 1948).[17] Such models did not contemplate the opposite effect: that more information could add noise that would actually serve to obscure important content. They did not, in other words, address the behavioral aspects that affect human information processing, as contrasted with mechanical concerns affecting communication channels (bandwidth, etc.).
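
Shannon’s central quantity can be stated compactly. For a source emitting symbols with probabilities $p_i$, the entropy, measured in bits, is:

```latex
H = -\sum_{i} p_i \log_2 p_i
```

Entropy bounds how far a message can be compressed without loss of content; it says nothing, however, about a human reader’s capacity to absorb the message once delivered, which is precisely the gap the information overload literature addresses.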

More recently, it has been recognized that there is a multiplicity of attributes or dimensions giving decision-relevant information its inherent value. Notably, the FASB’s conceptual framework project cited a hierarchy of attributes defining information value, albeit without rigorously describing how these specific attributes interacted or traded off (FASB 1980). Various academic researchers, however, have attempted to apply assorted techniques, including non-metric scaling, in the effort to elicit from information users their individual preferences for varying quantities of the multiple value dimensions, including developing trade-offs for such attributes as informational timeliness and reliability (Epstein and King 1982). Although such efforts could have served to shed light on such questions as how information users might view the more timely availability of condensed or summarized information, when compared to the tardier delivery of more detailed information, they would not have addressed the additional issue of whether incremental information, beyond some critical level of detail or quantity, could actually become dysfunctional or even destructive. The next important question is: can too much information be a bad thing?


This question would seem to be an obvious candidate for cross-disciplinary academic research by accounting and behavioral sciences faculty. Experimental subjects (preferably business managers, or at least graduate students, rather than the usual college sophomores) could be divided into control and experimental groups and given financial statements with greater or lesser degrees of detail in the footnotes, and/or with more or less expansive discussions in the MD&As, and then asked to make associated financial decisions. The wisdom of the ensuing decisions would then have to be assessed (alternatively, the amount of time devoted by the subjects to reading the materials could be measured to determine whether the inclusion of more information actually reduces the amount of time that the subject spends digesting it). Although all such experimental designs have important limitations (using “play money,” for instance), it should be possible to at least begin the process of mapping users’ utility functions, which could ultimately serve to inform standards-setters regarding the amount, and manner, of financial statement disclosures to be mandated.
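
The analysis of such a between-groups design could be sketched as follows. All scores below are invented purely for illustration, and the permutation test is one simple approach among many that researchers might choose:

```python
import random

random.seed(42)  # reproducibility of the illustrative run

# Hypothetical decision-quality scores (0-100) for two subject groups;
# these values are fabricated solely to illustrate the analysis.
concise = [72, 68, 75, 80, 66, 74, 78, 71]   # leaner disclosure package
verbose = [61, 58, 70, 64, 55, 63, 67, 60]   # heavily expanded package

observed = sum(concise) / len(concise) - sum(verbose) / len(verbose)

# Permutation test: how often does a random relabeling of subjects
# produce a mean gap at least as large as the one actually observed?
pooled = concise + verbose
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:8]) / 8 - sum(pooled[8:]) / 8
    if gap >= observed:
        extreme += 1
p_value = extreme / trials  # small p suggests the gap is not due to chance
```

A real study would of course require validated decision tasks, adequate sample sizes, and controls for subject expertise, but the skeleton of the inference is no more complicated than this.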

In fact, there is a substantial body of academic research on decision making, going back fifty years, which could and should be the foundation for exploration of the issue raised above: whether current financial statement disclosure requirements have reached the inflection point of dysfunctionality. As early as the 1950s, pioneering organizational behaviorist Herbert Simon (1955) posited that decision-makers in fact have limited cognitive abilities to store, process, and interpret information, and thus that the vaunted “rational economic choice behavior” in fact is only “boundedly rational.” This implies that decision-makers cannot attend to all the information made available to them and cannot evaluate all their choices perfectly.[18] In other words, whereas purely rational decision-making would imply that more information is always to be preferred to less information, “boundedly rational” decision-making could be impeded by information overload and could be improved if information were judiciously limited.

Following Simon’s groundbreaking theoretical work, many others have sought to explain how capital allocation decisions (such as investing and lending decisions) are actually made. These studies and hypotheses are generally found in the literature of the behavioral sciences, where perhaps they have received too little attention from most non-academic accountants.[19] However, if one is careful to “read between the lines” of the FASB Conceptual Framework,[20] the qualitative characteristic of usefulness can be seen to allude to a construct similar to what Simon discusses as part of his “satisficing” behavior hypothesis: given decision-makers’ “boundedly rational” behavior and limited cognitive abilities, to be truly useful accounting information should be limited in quantity and complexity.

Thus, the information processing limitations hypothesized by Simon and others directly validate the concept of information overload. Studies have shown that as decision makers are provided with more information, decision quality initially increases, but that once the information level reaches a certain point, decision quality decreases as yet additional information is provided (Stocks and Harrell 1995; Tuttle and Burton 1999).
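
The inverted-U pattern those studies describe can be illustrated with a deliberately stylized toy model. The quadratic functional form and every parameter value below are our own illustrative assumptions, not estimates from the cited research:

```python
def decision_quality(info_items, optimum=50.0, penalty=0.0004):
    """Stylized inverted-U: decision quality improves as information
    grows toward an optimum level, then deteriorates as overload sets in.

    The quadratic shape and parameter values are illustrative
    assumptions only, not estimates from the decision-making literature.
    """
    return 1.0 - penalty * (info_items - optimum) ** 2

# Scan candidate information levels to locate the peak of the curve.
levels = range(0, 101, 10)
peak = max(levels, key=decision_quality)  # 50 under these assumed parameters
```

Any concave single-peaked function would serve equally well; the point is simply that, past the peak, every additional disclosure item is modeled as reducing, not improving, decision quality.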

More specifically, it appears that rather than ignoring parts of the information provided, decision-makers employ simplifying decision approaches in the face of a surfeit of information, such as using a so-called “lexicographic” strategy.[21] However, it appears that not enough research has been done to strongly support conclusions regarding how sub-optimal decisions are actually made when the decision maker is afflicted with information overload. What is clear, nevertheless, is that decision-makers are impacted by such circumstances, and the quality of decisions suffers as a consequence. There is no reason to assume users of financial statements are immune from this phenomenon.

These findings – only a small fraction of which have been cited here – are consistent with a Laffer Curve-type phenomenon. In the authors’ opinion, given the large quantity of information now mandated for disclosure (in the MD&A as well as in the financial statement footnotes, per se), it is crucial that meaningful efforts be devoted to research how, if at all, it can be determined what quantum of decision-relevant information would be most likely to optimize rational decision making by investors and other users of financial reports. The imposition of additional disclosure requirements, or the elimination of certain existing ones (or both), could be affected by the outcome of such investigations.

[1] See also Morunga and Bradbury 2010.

[2] As cited in July 21, 2011, news release issued by ICAS (http://www.icas.org.uk/site/cms/contentviewarticle.asp?article=7615).

[3] In many of its recent Accounting Standards Updates (ASUs), U.S. standard-setter FASB has, in the summary accompanying the document, characterized the update as containing “enhanced disclosures.” It can legitimately be argued that some portion of the incremental disclosures contained in these documents do, indeed, represent an enhancement of the information available to the reader. However, for FASB to assert that “more is necessarily better” may simply represent a biased statement that is unsupportable.

[4] In fact, the Laffer Curve (so named by economist and former Wall Street Journal columnist Jude Wanniski) was not a wholly original thought, and Dr. Laffer himself cites a 14th Century thinker, Ibn Khaldun, for this idea. However, as the Laffer Curve it became popularly associated with the economic and tax policies and proposals of the Reagan Administration. The parabolic curve, Dr. Laffer and others have acknowledged, was symbolic and not an actual attempt to rigorously define the mathematical relationship between total tax receipts and specific tax rates. This is somewhat reminiscent of the economic law of diminishing marginal returns, which holds that additional productive inputs result in less than proportionate increases in output (commonly described in terms of basic inputs of material, labor, and overhead to physical production situations). However, in the case of the Laffer Curve (or similar phenomena), marginal returns are not merely diminishing but still positive; beyond the inflection point, actual net negative returns are realized.

[5] As noted in the foregoing footnote, even the Laffer Curve has not been reduced to a mathematical model, and is thus conceptual in nature. However, many would argue that, in spite of this limitation, the Laffer Curve is no less useful for policy formulation.

[6] This is meant to imply a decreased willingness to give lengthy written materials, such as financial reports, the attention needed to fully digest their content, and not increased incidence of clinical conditions such as attention deficit hyperactivity disorder (ADHD), which is beyond the scope of this article and the expertise of the authors.

[7] Often cited is the tendency to assume that necessary information can be accessed in real time and on-demand via Google™, Wikipedia™, or other search engines, making ‘cover-to-cover’ review of documents seemingly unnecessary.

[8] The authors’ predilection is to not have financial reporting bow to users’ disinclination to devote the necessary threshold level of attention to the information needed to make rational investment decisions. On the other hand, providers of financial information have to serve their customers, thus legitimizing the challenge to find ways to communicate all needed information in more concise, digestible and appealing forms, so that it will actually be read and used. In other words, the accounting profession must lead, and not merely succumb to popular culture-induced self-destructive habits such as superficiality.

[9] The UK’s Financial Reporting Standard for Smaller Entities was first promulgated in 1997 and is now in its fifth iteration, applying size criteria set forth in the UK Companies Act. IASB’s IFRS for SMEs, issued in 2009, does not actually impose a size test, but rather is optionally available to reporting entities that do not have public accountability, which generally means those not publicly-traded or anticipating public offerings of debt or equity, but also excludes entities that hold funds in a fiduciary capacity on behalf of the public such as commercial banks, investment banks, credit unions, insurance companies, broker/dealers, and mutual funds.

[10] U.S. companies can elect to report under full IFRS as issued by the IASB or under the streamlined IFRS for SMEs, but few if any have done so, other than subsidiaries of foreign companies reporting under IFRS. The American Institute of Certified Public Accountants (AICPA) recognizes IFRS as issued by the IASB as an acceptable set of financial reporting standards, and auditors can opine on financial statements prepared in conformity with full IFRS or with IFRS for SMEs.

A “Blue-Ribbon Panel,” appointed by the Financial Accounting Foundation (FAF, the oversight body of the Financial Accounting Standards Board (FASB)), the American Institute of Certified Public Accountants (AICPA), and the National Association of State Boards of Accountancy (NASBA), studied this issue and issued recommendations that included a proposal to establish a new, autonomous standard-setting board for U.S. private companies. The Panel rejected outright adoption of IFRS for SMEs, however, due to the perception by its members that privately held businesses would find conversion from a U.S. GAAP-based system to one based on IFRS to be unnecessarily costly. A news release by FAF dated January 26, 2011, described the proposed changes, which would “focus on making exceptions and modifications to U.S. GAAP for private companies that better respond to the needs of the private company sector.” The FAF also discussed the establishment of a “differential framework – a set of decision criteria – to facilitate the standard setter’s ability to make appropriate, justifiable exceptions and modifications.” On October 4, 2011, FAF issued a Request for Comment, Plan to Establish the Private Company Standards Improvement Council, which adopted many of the recommendations of the Blue-Ribbon Panel. Notably, however, the “Council” that FAF proposes to form would be subject to the oversight of FASB, whereby FASB would have the right to veto decisions made by a super-majority of the Council. This structure falls short of the recommendation by the Blue-Ribbon Panel and has been met with substantial and vitriolic criticism from its opponents. Comments on this document were due by January 14, 2012.

Canada replaced Canadian GAAP (which closely resembled U.S. GAAP) with full IFRS for publicly-held companies in 2011. As is the case in the U.S., Canada also rejected the adoption of IFRS for SMEs for its private companies. Instead, its standard-setting body, the Accounting Standards Board (AcSB), adopted its Accounting Standards for Private Enterprises, an abridged version of full Canadian GAAP, which private enterprises were permitted to adopt in lieu of IFRS in annual financial statements for fiscal years beginning on or after January 1, 2011.

[11] Few, if any, proposals call for differential recognition or measurement options for SMEs; by and large, it is a reduction in disclosures that has been proposed, although some, such as IFRS for SMEs, do mandate a narrower range of acceptable options for recognition and measurement compared to the full standards. Arguably, a narrowing of alternative elections to account for similar transactions or events should be considered equally desirable for financial statements of larger, publicly accountable enterprises since this would lead to enhanced comparability of their financial statements with those of other enterprises.

[12] See, inter alia: Driver, M. J. and Mock, T. J. 1977; Snowball, D. 1980; Casey, C. J., Jr. 1980; Malhotra, N. K. 1982; Malhotra, N. K. 1984; Keller, K. L. and Staelin, R. 1987; Chewning, E. G., Jr. and Harrell, A. M. 1990; Payne, J. W. et al. 1993; Bonner, S. E. 1994; Stocks, M. H. and Harrell, A. 1995; Iselin, E. R. 1996; Simnett, R. 1996; Stocks, M. H. and Tuttle, B. 1998; Bettman, J. R. et al. 1998; Tuttle, B. and Burton, F. G. 1999; and Speier, C. et al. 1999.

[13] The terms “Alternative-A” and “Alt-A” describe mortgage loans for which the lending standards followed did not conform to (and thus were alternatives to) the standards prescribed for mortgages backed by Fannie Mae and Freddie Mac.

[14] This observation is not intended to excuse the managers of entities that may have committed financial reporting fraud, or that obfuscated by “hiding relevant information in plain sight,” accompanying it with long, mundane narratives of jargon-laced exposition about minutiae of little or no concern to the reader; nor does it excuse those who engage in outright misrepresentations or departures from GAAP.

[15] This is not to suggest that FNMA’s business model was free of risk, nor to deny that the investing public had long underestimated that riskiness, in large part because of the implied Government guarantee of all FNMA debt. But the financial statements and notes, themselves, should not have misled investors. For example, the statement of financial position clearly reveals a 19:1 debt-to-equity ratio ($44 billion of equity supporting $882 billion of assets at year-end 2007), meaning that a mere 5% decrease in the value of FNMA’s assets would have wiped out all of its equity.
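For readers who wish to verify the arithmetic in this note, the following minimal sketch reproduces it from the two balance-sheet figures cited (amounts in billions of dollars, rounded as in the note; this is an illustration, not a reproduction of FNMA’s actual reporting):

```python
# FNMA year-end 2007 figures cited in the note, in $ billions
assets = 882.0
equity = 44.0
liabilities = assets - equity          # 838.0

# Leverage measured as liabilities (debt) to equity
debt_to_equity = liabilities / equity  # roughly 19:1

# Fractional decline in asset values sufficient to exhaust all equity
wipeout_decline = equity / assets      # roughly 0.05, i.e., about 5%

print(f"debt-to-equity: {debt_to_equity:.1f}:1")
print(f"asset decline wiping out equity: {wipeout_decline:.1%}")
```

The calculation makes the note’s point concrete: with equity amounting to only about one-twentieth of total assets, even a modest write-down of the asset base would have eliminated the entire equity cushion.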

[16] Wall Street Journal editorials, for example, had predicted FNMA’s and FHLMC’s ultimate demise, and the costly Government bail-outs that would follow, for over a decade before these events came to pass.

[17] Shannon’s work was important to the optimization of communications channels based on data compression, and his model was analogous to that expressed by the second law of thermodynamics, albeit applied to a wholly different realm of inquiry.

[18] Professor Simon coined the term “satisficing” to describe a decision-making strategy that attempts to meet criteria for adequacy, rather than seeking to identify an optimal solution. He noted that a satisficing strategy may often approach optimality if the costs of the decision-making process, including the cost of obtaining complete information, are considered in the outcome calculus; and that decision-makers satisfice because they do not have the cognitive capabilities to optimize.

[19] There is a substantial body of literature on behavioral accounting, including numerous academic studies and a dedicated journal, Behavioral Research in Accounting, published by the Accounting, Behavior and Organizations Section of the American Accounting Association. One who has written extensively in recent years on the intersection of behavioral science and accounting is Dr. Sridhar Ramamoorti, an associate professor of accountancy in the Coles College of Business at Kennesaw State University in Kennesaw, Georgia (U.S.).

[20] FASB, op. cit.

[21] This name refers to the process used to look up words in a dictionary (a lexicon), whereby the search is first narrowed by focusing on the first letter of the word, then the second letter, and so on. In a lexicographic search, a key attribute (say, reported growth in earnings per share) is used to reach a preliminary conclusion, after which a secondary attribute (say, the debt-to-equity ratio) is brought to bear, and so forth. In the abstract, a lexicographic strategy for making, e.g., investment decisions is less desirable than a strategy that processes all relevant attributes, such as by weighting them according to some predetermined assessment of relative importance. Some research shows that decision-makers may use a less complex model, such as the lexicographic, for preliminary screening, then shift to a more complex model once the number of decision alternatives has been winnowed. (See, e.g., Keller and Staelin 1987.)
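The contrast drawn in this note between lexicographic screening and a fully compensatory, weighted-attribute strategy can be sketched in a few lines. The attribute names, thresholds, and weights below are purely illustrative assumptions for exposition; they are not drawn from any of the studies cited:

```python
# Hypothetical investment candidates with two decision attributes
candidates = [
    {"name": "A", "eps_growth": 0.12, "debt_to_equity": 1.8},
    {"name": "B", "eps_growth": 0.03, "debt_to_equity": 0.4},
    {"name": "C", "eps_growth": 0.09, "debt_to_equity": 0.9},
]

def lexicographic(cands, min_eps_growth=0.05):
    # Stage 1: screen on the key attribute alone (EPS growth threshold)
    survivors = [c for c in cands if c["eps_growth"] >= min_eps_growth]
    # Stage 2: rank the survivors by the secondary attribute (lower leverage first)
    return sorted(survivors, key=lambda c: c["debt_to_equity"])

def weighted(cands, w_growth=0.7, w_leverage=0.3):
    # Compensatory model: process all attributes at once,
    # weighted by an assumed assessment of relative importance
    def score(c):
        return w_growth * c["eps_growth"] - w_leverage * c["debt_to_equity"]
    return sorted(cands, key=score, reverse=True)

print([c["name"] for c in lexicographic(candidates)])
print([c["name"] for c in weighted(candidates)])
```

Note that the two strategies can order the same alternatives quite differently: the lexicographic screen discards a low-growth candidate outright, however attractive its other attributes, whereas the weighted model lets a strong secondary attribute compensate for a weak primary one. The research cited in this note suggests decision-makers may combine the two, screening lexicographically before applying the costlier compensatory analysis.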
