Sunday, November 8, 2015

Financial technology

Financial technology, also known as FinTech, is a line of business based on using software to provide financial services. Financial technology companies are generally startups founded with the purpose of disrupting incumbent financial systems and corporations that are less reliant on software.
Global investment in financial technology more than tripled to $4 billion in 2013 from $930 million in 2008. The nascent financial technology industry has seen rapid growth over the last few years, according to the office of the Mayor of London. Forty percent of London's workforce is employed in financial and technology services. Some of the better-known FinTech companies in London include FundingCircle, Nutmeg and TransferWise.
In the United States, there are numerous FinTech startups, including several of the best-known companies, such as Betterment, Lending Club, Prosper, SoFi, Square, and Stripe (along with others, such as LOYAL3, MaxMyInterest, Robinhood and Wealthfront).
In Europe, $1.5 billion was invested in financial technology companies in 2014, with London-based companies receiving $539 million, Amsterdam-based companies $306 million, and Stockholm-based companies $266 million. After London, Stockholm has been the second-highest-funded city in the EU over the past 10 years.
In the Asia-Pacific region, this growth saw a new financial technology hub open in Sydney, Australia, in April 2015. The market already has a number of strong financial technology players, such as Tyro Payments, Nimble, Stockspot, Pocketbook and SocietyOne, and the new hub is expected to further accelerate the sector's growth. A financial technology innovation lab is also being launched in Hong Kong to help foster innovation in financial services using technology.
Within the academic community, Wharton FinTech was founded at the Wharton School of the University of Pennsylvania in October 2014 with the objective of connecting academics, innovators, investors, and other thought leaders within the FinTech industry to each other and to the ideas that are reinventing global financial services. The University of Hong Kong Law School, in conjunction with the University of New South Wales, has published a research paper tracing the evolution of FinTech and its regulation.
The National Digital Research Centre in Dublin, Ireland, defines financial technology as "innovation in financial services", adding that "the term has started to be used for broader applications of technology in the space – to front-end consumer products, to new entrants competing with existing players, and even to new paradigms such as Bitcoin."
In the financial advisory sector, established players such as Fidelity Investments have partnered with financial technology startups such as FutureAdvisor (recently acquired by BlackRock), allowing new technology to work within a prominent custodian. Even celebrities including Snoop Dogg and Nas are beginning to put their resources into the nascent fintech space by investing in financial technology startup Robinhood.
Finance is seen as one of the industries most vulnerable to disruption by software because financial services, much like publishing, are made of bits rather than physical goods. While finance has been shielded by regulation until now, and weathered the dot-com boom without major upheaval, a new wave of startups is increasingly "disaggregating" global banks. However, aggressive enforcement of the Bank Secrecy Act and money transmission regulations represents an ongoing threat to FinTech companies.
In addition to established competitors, FinTech companies often face doubts from financial regulators. Data security is another issue regulators are concerned about, because of the threat of hacking and the need to protect sensitive consumer and corporate financial data. Any data breach, no matter how small, can ruin a FinTech company's reputation. The online financial sector is also an increasing target of distributed denial-of-service extortion attacks. Marketing is another challenge, as FinTech companies are often outspent by larger rivals. The security challenge is also faced by traditional banks, since they too offer Internet-connected customer services.

Big data

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, and information privacy. The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set. Accuracy in big data may lead to more confident decision making, and better decisions can mean greater operational efficiency, cost reduction and reduced risk.
Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on". Scientists, business executives, practitioners of media and advertising, and governments alike regularly meet difficulties with large data sets in areas including Internet search, finance and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, and biological and environmental research.
Data sets grow in size in part because they are increasingly being gathered by cheap and numerous information-sensing mobile devices, aerial sensors (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks. The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; as of 2012, 2.5 exabytes (2.5×10^18 bytes) of data were created every day. The challenge for large enterprises is determining who should own big data initiatives that straddle the entire organization.
Work with big data is necessarily uncommon; most analysis is of "PC size" data, on a desktop PC or notebook that can handle the available data set.
Relational database management systems and desktop statistics and visualization packages often have difficulty handling big data. The work instead requires "massively parallel software running on tens, hundreds, or even thousands of servers". What is considered "big data" varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. Thus, what is considered "big" one year becomes ordinary later. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."
Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time. Big data "size" is a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data. Big data is a set of techniques and technologies that require new forms of integration to uncover hidden value in datasets that are diverse, complex, and of massive scale.
In a 2001 research report and related lectures, META Group (now Gartner) analyst Doug Laney defined data growth challenges and opportunities as being three-dimensional, i.e. increasing volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources). Gartner, and now much of the industry, continue to use this "3Vs" model for describing big data. In 2012, Gartner updated its definition as follows: "Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization." Additionally, some organizations add a fourth V, "veracity", to describe it.
Gartner's definition of the 3Vs is still widely used, and is consistent with a consensus definition that states that "Big Data represents the Information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value". The 3Vs have since been expanded with other complementary characteristics of big data:
  • Volume: big data doesn't sample. It just observes and tracks what happens
  • Velocity: big data is often available in real-time
  • Variety: big data draws from text, images, audio, video; plus it completes missing pieces through data fusion
  • Machine Learning: big data often doesn't ask why and simply detects patterns
  • Digital footprint: big data is often a cost-free byproduct of digital interaction
The growing maturity of the concept fosters a sounder distinction between big data and business intelligence with regard to data and their use:
  • Business Intelligence uses descriptive statistics with data with high information density to measure things, detect trends etc.;
  • Big data uses inductive statistics and concepts from nonlinear system identification to infer laws (regressions, nonlinear relationships, and causal effects) from large sets of data with low information density, in order to reveal relationships and dependencies and to predict outcomes and behaviors (see the sketch after this list).
In a popular tutorial article published in IEEE Access Journal, the authors classified existing definitions of big data into three categories: Attribute Definition, Comparative Definition and Architectural Definition. The authors also presented a big-data technology map that illustrates the key technology evolution for big data.
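To make the contrast concrete, the sketch below (plain Python, with made-up data; no specific tool from this article is implied) fits a simple least-squares regression to a large set of noisy observations, the inductive "infer a law from the data" step described above:

```python
# Illustrative only: fit y = a*x + b by ordinary least squares to noisy,
# made-up data; the inductive, model-fitting step described above.
import random

random.seed(0)
xs = [i / 10.0 for i in range(1000)]
ys = [2.0 * x + 1.0 + random.gauss(0, 5.0) for x in xs]  # low information density: heavy noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)

slope = cov_xy / var_x               # inferred relationship between x and y
intercept = mean_y - slope * mean_x

print(f"inferred law: y = {slope:.2f}*x + {intercept:.2f}")
```

The descriptive, business-intelligence counterpart would stop at summary statistics such as the mean; the inductive step instead estimates a relationship that can be used for prediction.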
Big data can be described by the following characteristics:
Volume
The quantity of generated data. The size of the data determines its value and potential, and whether it can actually be considered big data or not. The name "big data" itself refers to size, hence this characteristic.
Variety
The type and nature of the data. Knowing the variety of the content helps the people who analyze the data to use it effectively and to preserve its value.
Velocity
The speed at which the data is generated and processed to meet the demands and challenges of growth and development.
Variability
The inconsistency the data can show at times, which can hamper the process of handling and managing the data effectively.
Veracity
The quality of captured data, which can vary greatly. Accurate analysis depends on the veracity of source data.
Complexity
Data management can be very complex, especially when large volumes of data come from multiple sources. Data must be linked, connected, and correlated so users can grasp the information the data is supposed to convey.
Factory work and cyber-physical systems may use a 6C system:
  • Connection (sensor and networks)
  • Cloud (computing and data on demand)
  • Cyber (model and memory)
  • Content/context (meaning and correlation)
  • Community (sharing and collaboration)
  • Customization (personalization and value)
Data must be processed with advanced tools (analytics and algorithms) to reveal meaningful information. Considering both visible and invisible issues in, for example, a factory, the information-generation algorithm must detect and address invisible issues such as machine degradation and component wear on the factory floor.
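As a toy illustration of detecting such "invisible" issues, the sketch below (hypothetical sensor readings and thresholds, not any particular factory system) flags samples that drift far from their recent rolling average, a crude proxy for machine degradation or component wear:

```python
# Toy sketch with hypothetical sensor data (not any specific factory system):
# flag readings that drift far from their recent rolling average.
from collections import deque

def detect_drift(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings far from the rolling mean of the window."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            std = (sum((r - mean) ** 2 for r in recent) / window) ** 0.5 or 1e-9
            if abs(value - mean) > threshold * std:
                yield i, value           # possible wear or degradation
        recent.append(value)

# Example: a stable vibration signal followed by a late upward drift.
signal = [1.0 + 0.01 * (i % 5) for i in range(100)] + [2.5, 2.7, 3.0]
for idx, val in detect_drift(signal):
    print(f"possible degradation at sample {idx}: {val:.2f}")
```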
In 2000, Seisint Inc. developed a C++-based distributed file-sharing framework for data storage and query. The system stores and distributes structured, semi-structured, and unstructured data across multiple servers. Users can build queries in a modified C++ called ECL. ECL uses an "apply schema on read" method to infer the structure of stored data at the time of the query. In 2004, LexisNexis acquired Seisint Inc. and in 2008 acquired ChoicePoint, Inc. and its high-speed parallel processing platform. The two platforms were merged into HPCC Systems, and in 2011 HPCC was open-sourced under the Apache v2.0 License. Currently, HPCC and Quantcast File System are the only publicly available platforms capable of analyzing multiple exabytes of data.
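ECL itself is not shown here, but the "apply schema on read" idea can be illustrated in a few lines of Python: raw records are stored untouched, and field names and types are applied only when a query runs (the record format and field names below are invented for the example):

```python
# Not ECL: a minimal Python illustration of the "apply schema on read" idea.
# Raw records are stored untyped; a schema is applied only when a query runs.
import json

raw_store = [                                   # data lands as-is, no upfront schema
    '{"user": "a", "amount": "10.5", "ts": "2015-01-01"}',
    '{"user": "b", "amount": "3.2"}',           # missing field tolerated
    'not even json',                            # malformed record tolerated
]

def query(raw_records, schema):
    """Apply field names and type converters (the 'schema') at read time."""
    for line in raw_records:
        try:
            rec = json.loads(line)
        except ValueError:
            continue                            # skip records this schema cannot read
        yield {field: cast(rec[field]) if field in rec else None
               for field, cast in schema.items()}

# The schema is chosen by the query, not by the storage layer.
for row in query(raw_store, {"user": str, "amount": float}):
    print(row)
```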
In 2004, Google published a paper on a process called MapReduce that used such an architecture. The MapReduce framework provides a parallel processing model and associated implementation to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes and processed in parallel (the Map step). The results are then gathered and delivered (the Reduce step). The framework was very successful, so others wanted to replicate the algorithm. Therefore, an implementation of the MapReduce framework was adopted by an Apache open-source project named Hadoop.
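A minimal, single-process sketch of the pattern is shown below (word counting in plain Python; in Hadoop or Google's MapReduce the map and reduce phases would run on many separate nodes rather than over in-memory lists):

```python
# A minimal, single-process sketch of the MapReduce pattern (word count).
# In Hadoop or Google's MapReduce, the map and reduce phases run on many nodes;
# here the "nodes" are just items of an in-memory list.
from collections import defaultdict

documents = ["big data big value", "data in data out", "big big big"]

# Map step: each input split independently emits (key, 1) pairs.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

mapped = [pair for doc in documents for pair in map_phase(doc)]

# Shuffle step: group the intermediate pairs by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce step: combine each key's values into the final result.
word_counts = {key: sum(values) for key, values in groups.items()}
print(word_counts)   # {'big': 5, 'data': 3, 'value': 1, 'in': 1, 'out': 1}
```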
MIKE2.0 is an open approach to information management that acknowledges the need for revisions due to big data implications identified in an article titled "Big Data Solution Offering". The methodology addresses handling big data in terms of useful permutations of data sources, complexity in interrelationships, and difficulty in deleting (or modifying) individual records.
Recent studies show that a multiple-layer architecture is one option for dealing with big data. The distributed parallel architecture distributes data across multiple processing units, and parallel processing improves speed. This type of architecture inserts data into a parallel DBMS, which implements the use of the MapReduce and Hadoop frameworks. This type of framework aims to make the processing power transparent to the end user by using a front-end application server.
Big Data Analytics for Manufacturing Applications can be based on a 5C architecture (connection, conversion, cyber, cognition, and configuration).
The data lake allows an organization to shift its focus from centralized control to a shared model to respond to the changing dynamics of information management. This enables quick segregation of data into the data lake, thereby reducing the overhead time.
Big data requires exceptional technologies to efficiently process large quantities of data within tolerable elapsed times. A 2011 McKinsey report suggests suitable technologies include A/B testing, crowdsourcing, data fusion and integration, genetic algorithms, machine learning, natural language processing, signal processing, simulation, time series analysis and visualisation. Multidimensional big data can also be represented as tensors, which can be more efficiently handled by tensor-based computation, such as multilinear subspace learning. Additional technologies being applied to big data include massively parallel-processing (MPP) databases, search-based applications, data mining, distributed file systems, distributed databases, cloud-based infrastructure (applications, storage and computing resources) and the Internet.
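As a small, self-contained example of one technique from that list, the sketch below runs a two-proportion z-test on hypothetical A/B conversion counts; the numbers and cut-off are illustrative only:

```python
# Illustrative A/B test on hypothetical conversion counts: a two-proportion
# z-test, one of the simpler techniques on the list above.
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = ab_z_score(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}")   # |z| > 1.96 suggests a significant difference at the 5% level
```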
Some but not all MPP relational databases have the ability to store and manage petabytes of data. Implicit is the ability to load, monitor, back up, and optimize the use of the large data tables in the RDBMS.
DARPA's Topological Data Analysis program seeks the fundamental structure of massive data sets, and in 2008 the technology went public with the launch of a company called Ayasdi.
The practitioners of big data analytics processes are generally hostile to slower shared storage, preferring direct-attached storage (DAS) in its various forms from solid state drive (SSD) to high capacity SATA disk buried inside parallel processing nodes. The perception of shared storage architectures—Storage area network (SAN) and Network-attached storage (NAS) —is that they are relatively slow, complex, and expensive. These qualities are not consistent with big data analytics systems that thrive on system performance, commodity infrastructure, and low cost.
Real or near-real time information delivery is one of the defining characteristics of big data analytics. Latency is therefore avoided whenever and wherever possible. Data in memory is good—data on spinning disk at the other end of a FC SAN connection is not. The cost of a SAN at the scale needed for analytics applications is very much higher than other storage techniques.
There are advantages as well as disadvantages to shared storage in big data analytics, but big data analytics practitioners as of 2011 did not favour it.
Big data has increased the demand for information management specialists: Software AG, Oracle Corporation, IBM, Microsoft, SAP, EMC, HP and Dell have spent more than $15 billion on software firms specializing in data management and analytics. In 2010, this industry was worth more than $100 billion and was growing at almost 10 percent a year, about twice as fast as the software business as a whole.
Developed economies increasingly use data-intensive technologies. There are 4.6 billion mobile-phone subscriptions worldwide, and between 1 billion and 2 billion people accessing the internet. Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more people became literate, which in turn led to information growth. The world's effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, and 65 exabytes in 2007, and predictions put the amount of internet traffic at 667 exabytes annually by 2014. According to one estimate, one third of the globally stored information is in the form of alphanumeric text and still image data, which is the format most useful for most big data applications. This also shows the potential of yet-unused data (i.e. in the form of video and audio content).
While many vendors offer off-the-shelf solutions for Big Data, experts recommend the development of in-house solutions custom-tailored to solve the company's problem at hand if the company has sufficient technical capabilities.

Risk management

Risk management is the identification, assessment, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives) followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events or to maximize the realization of opportunities. Risk management’s objective is to assure uncertainty does not deflect the endeavor from the business goals.
Risks can come from various sources: e.g., uncertainty in financial markets, threats from project failures (at any phase in design, development, production, or sustainment life-cycles), legal liabilities, credit risk, accidents, natural causes and disasters, deliberate attack from an adversary, or events of uncertain or unpredictable root cause. There are two types of events: negative events are classified as risks, while positive events are classified as opportunities. Several risk management standards have been developed, including those of the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and ISO. Methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety.
Risk sources are increasingly identified and located not only in infrastructural or technological assets and tangible variables, but also in human-factor variables, mental states and decision making. The interaction between human factors and tangible aspects of risk highlights the need to focus closely on human factors as one of the main drivers for risk management, a "change driver" that comes first of all from the need to know how humans perform in challenging environments and in the face of risks (Daniele Trevisani, 2007). As the author describes, «it is an extremely hard task to be able to apply an objective and systematic self-observation, and to make a clear and decisive step from the level of the mere "sensation" that something is going wrong, to the clear understanding of how, when and where to act. The truth of a problem or risk is often obfuscated by wrong or incomplete analyses, fake targets, perceptual illusions, unclear focusing, altered mental states, and lack of good communication and confrontation of risk management solutions with reliable partners. This makes the Human Factor aspect of Risk Management sometimes heavier than its tangible and technological counterpart».
Strategies to manage threats (uncertainties with negative consequences) typically include avoiding the threat, reducing the negative effect or probability of the threat, transferring all or part of the threat to another party, and even retaining some or all of the potential or actual consequences of a particular threat, and the opposites for opportunities (uncertain future states with benefits).
Certain aspects of many of the risk management standards have come under criticism for having no measurable improvement on risk, whereas the confidence in estimates and decisions seems to increase. For example, it has been shown that one in six IT projects experiences cost overruns of 200% on average, and schedule overruns of 70%.
A widely used vocabulary for risk management is defined by ISO Guide 73, "Risk management. Vocabulary."
In ideal risk management, a prioritization process is followed whereby the risks with the greatest loss (or impact) and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order. In practice the process of assessing overall risk can be difficult, and balancing the resources used to mitigate between risks with a high probability of occurrence but lower loss and risks with high loss but a lower probability of occurrence can often be mishandled.
Intangible risk management identifies a new type of risk that has a 100% probability of occurring but is ignored by the organization due to a lack of identification ability. For example, when deficient knowledge is applied to a situation, a knowledge risk materializes. Relationship risk appears when ineffective collaboration occurs. Process-engagement risk may be an issue when ineffective operational procedures are applied. These risks directly reduce the productivity of knowledge workers, and decrease cost-effectiveness, profitability, service, quality, reputation, brand value, and earnings quality. Intangible risk management allows risk management to create immediate value from the identification and reduction of risks that reduce productivity.
Risk management also faces difficulties in allocating resources. This is the idea of opportunity cost. Resources spent on risk management could have been spent on more profitable activities. Again, ideal risk management minimizes spending (or manpower or other resources) and also minimizes the negative effects of risks.

Method

For the most part, these methods consist of the following elements, performed, more or less, in the following order.
  1. identify, characterize threats
  2. assess the vulnerability of critical assets to specific threats
  3. determine the risk (i.e. the expected likelihood and consequences of specific types of attacks on specific assets)
  4. identify ways to reduce those risks
  5. prioritize risk reduction measures based on a strategy (one illustrative strategy is sketched below)
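One illustrative strategy for the final step is to rank candidate measures by expected risk reduction per unit cost; the sketch below uses invented measures and figures purely as an example:

```python
# Toy sketch of the final step: rank hypothetical mitigation measures by
# expected annual loss avoided per unit of annual cost.
measures = [
    # (name, annual loss avoided, annual cost) -- all figures invented
    ("patch legacy servers",     120_000, 15_000),
    ("staff security training",   60_000, 10_000),
    ("redundant data centre",    200_000, 90_000),
]

def cost_effectiveness(measure):
    _name, loss_avoided, cost = measure
    return loss_avoided / cost

for name, loss_avoided, cost in sorted(measures, key=cost_effectiveness, reverse=True):
    print(f"{name}: {loss_avoided / cost:.1f}x return on mitigation spend")
```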

Principles of risk management

The International Organization for Standardization (ISO) identifies the following principles of risk management:
Risk management should:
  • create value – resources expended to mitigate risk should be less than the consequence of inaction, or (as in value engineering), the gain should exceed the pain
  • be an integral part of organizational processes
  • be part of decision making process
  • explicitly address uncertainty and assumptions
  • be a systematic and structured process
  • be based on the best available information
  • be tailorable
  • take human factors into account
  • be transparent and inclusive
  • be dynamic, iterative and responsive to change
  • be capable of continual improvement and enhancement
  • be continually or periodically re-assessed

Composite risk index


The composite risk index is typically computed as the impact of the risk event multiplied by its probability of occurrence. The impact of the risk event is commonly assessed on a scale of 1 to 5, where 1 and 5 represent the minimum and maximum possible impact of an occurrence of a risk (usually in terms of financial losses). However, the 1 to 5 scale can be arbitrary and need not be on a linear scale.

The probability of occurrence is likewise commonly assessed on a scale from 1 to 5, where 1 represents a very low probability of the risk event actually occurring while 5 represents a very high probability of occurrence. This axis may be expressed in either mathematical terms (event occurs once a year, once in ten years, once in 100 years etc.) or may be expressed in "plain English" (event has occurred here very often; event has been known to occur here; event has been known to occur in the industry etc.). Again, the 1 to 5 scale can be arbitrary or non-linear depending on decisions by subject-matter experts.

The composite risk index thus can take values ranging (typically) from 1 through 25, and this range is usually arbitrarily divided into three sub-ranges. The overall risk assessment is then Low, Medium or High, depending on the sub-range containing the calculated value of the Composite Index. For instance, the three sub-ranges could be defined as 1 to 8, 9 to 16 and 17 to 25.
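A direct transcription of this scheme (1-to-5 scales, the index as their product, and the example sub-ranges above) might look like the following sketch:

```python
# Direct transcription of the scheme above: impact and probability each on a
# 1-to-5 scale, composite index = impact * probability, split into three bands.
def composite_risk_index(impact, probability):
    if not (1 <= impact <= 5 and 1 <= probability <= 5):
        raise ValueError("impact and probability must be on a 1-to-5 scale")
    index = impact * probability               # ranges from 1 to 25
    if index <= 8:
        rating = "Low"
    elif index <= 16:
        rating = "Medium"
    else:
        rating = "High"
    return index, rating

print(composite_risk_index(impact=4, probability=5))   # (20, 'High')
print(composite_risk_index(impact=2, probability=3))   # (6, 'Low')
```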

Note that the probability of risk occurrence is difficult to estimate, since the past data on frequencies are not readily available, as mentioned above. After all, probability does not imply certainty.

Likewise, the impact of the risk is not easy to estimate since it is often difficult to estimate the potential loss in the event of risk occurrence.
Further, both of the above factors can change in magnitude depending on the adequacy of risk avoidance and prevention measures taken and on changes in the external business environment. Hence it is absolutely necessary to periodically re-assess risks and to intensify or relax mitigation measures as necessary. Changes in procedures, technology, schedules, budgets, market conditions, political environment, or other factors typically require re-assessment of risks.
