Digital platforms are becoming increasingly important. As the world’s leading business model, platforms underpin three of the five largest publicly listed companies. A platform can be described as a business model that enables value-adding interactions between consumers and providers. Such business models are also referred to as two-sided or multi-sided markets.
Two-sided markets already complement and extend traditional modes of consumption in many domains. Examples include short-term accommodation sharing, crowd work, delivery services, resale and auction platforms, and ride-sharing markets.
At the core of digital platforms’ business model is the systematic aggregation, processing, and presentation of large amounts of data to attract attention to specific content, as done by, for example, search engines, comparison and rating portals, and social networks.
Digital platforms facilitate interactions (and transactions) between consumers and providers by means of information technology, enabling easy access to the platform for all market players. Market players are, in principle, consumers, suppliers, and the platform service provider.
Central to the design and operation of digital platforms is the existence of network effects: providers benefit from many consumers (demand); likewise, consumers benefit from many providers (supply, competition). Platforms can therefore be described as “systems with positive feedback loops”, as every new actor on the platform increases the value for all other participants.
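This positive feedback loop can be illustrated with a toy value model. The assumption below, that platform value grows with the number of possible consumer-provider interactions, is a deliberate simplification in the spirit of Metcalfe-style network-value arguments, not an empirical claim from our research:

```python
# Toy illustration of a positive feedback loop on a two-sided platform.
# Assumption: each possible consumer-provider pairing contributes one unit
# of value. Real platform value functions are, of course, more complex.

def platform_value(consumers: int, providers: int) -> int:
    """Value as the number of possible consumer-provider interactions."""
    return consumers * providers

# Every new actor raises the value for everyone already on the platform:
print(platform_value(10, 5))  # 50
print(platform_value(11, 5))  # 55 -- one extra consumer adds 5 units
print(platform_value(11, 6))  # 66 -- one extra provider adds 11 units
```

Note how the marginal value of each new participant grows with the size of the other market side, which is exactly why early growth is so decisive for platforms.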
Our main focus of research in this field is the analysis and the design of digital platform ecosystems.
Crowdfunding can be described as an internet-based financing and investment tool through which a large number of individuals support a project financially with relatively small contributions. In equity-based crowdfunding (ECF), the contributions made by mainly unprofessional (small) investors are linked to a participation in the project’s future profits. ECF thus enables ventures to acquire equity or mezzanine capital through an intermediary online platform.
ECF is discussed as a viable means of democratizing finance by providing easy access to capital, particularly for seed and startup companies. However, this largely unregulated market suffers from cases of fraud and misconduct.
Scientific research on ECF is still in its infancy. The main areas of investigation have so far been limited to the role of ECF compared to traditional sources of finance, legal issues, characteristics of capital seekers and investors, and funding mechanisms. In this respect, little has been done to understand the functioning and impact of platform service providers. As “network orchestrators”, platforms match capital demand and supply, but they also shape the terms and conditions for transactions in the industry. Moreover, the pre- and post-funding phases, including campaign pre-selection, due diligence, post-campaign performance, and the trading of shares, remain “blind spots” in scientific research.
Our main research topics in this field are:
- User technology acceptance
- Peer-investor interaction and source credibility
Online Trust & Reputation
Trust is the willingness to depend on others. Trusting beliefs typically comprise perceptions of the other party’s competence, benevolence, and integrity.
Different user types and motives, together with the paramount importance of trust, make reputation systems and reputation management important drivers of platform growth and survival.
Our research on online trust and reputation focuses on the following topics:
- What creates trust?
- Economic value of reputation
- Cross-platform data and reputation portability
- Reputation positivity bias
We investigate which elements or display formats of an online platform create trust, among both customers and retailers. Furthermore, we explore how real-estate agents or hosts need to present themselves in order to be chosen by customers. A good example in this regard is Airbnb, a platform for vacation rentals whose entire design is based on findings from research into trust in digital services.
Trust in Artificial Intelligence
Artificial intelligence (AI) and machine learning (ML) have become increasingly important tools to aid human decision making in various fields. For a growing number of applications, ML-trained algorithms achieve performance levels comparable to or even surpassing those of humans. In light of this rapid progress in cognitive automation, however, the role of human decision makers is an often-overlooked factor. Specifically, human users’ trust in AI systems should be taken into account for at least three reasons.
First, trust functions as an important prerequisite of technology acceptance, adoption, and use in general and for AI in particular. It is not surprising that people hesitate to put their own or others’ lives in the hands of an AI assistant, especially when assistants make decisions without providing any reasoning for choosing one solution over another.
Second, even once professionals have adopted AI-based support systems to inform their decisions, trust in these systems will be a prerequisite for them to actually base their decisions on the systems’ predictions, classifications, and recommendations. After all, the role of human decision makers is almost always vital, since humans still have the final say on how to proceed with the AI’s recommendation.
Third, understanding the processes governing human trust in AI is crucial to counteracting the potential ramifications and side effects of (1) mistakenly denied and (2) unfounded trust. As humans increasingly leverage AI to inform, derive, and justify decisions, it becomes important to quantify when, how, and why they trust those systems excessively or blindly, or mistrust them.
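One simple way to quantify these two failure modes is to compare a user’s reliance behavior against the AI’s actual correctness. The sketch below is hypothetical: the log format (pairs of “AI was correct” and “human followed the AI”) and the metric names are our own illustrative assumptions, not an established measurement instrument:

```python
# Hypothetical sketch: quantifying misplaced trust from decision logs.
# Each record is (ai_correct, human_followed). Over-reliance captures
# unfounded trust (following a wrong AI); under-reliance captures
# mistakenly denied trust (rejecting a correct AI).

def reliance_rates(log):
    followed_wrong = sum(1 for ai_ok, followed in log if followed and not ai_ok)
    rejected_right = sum(1 for ai_ok, followed in log if ai_ok and not followed)
    n_wrong = sum(1 for ai_ok, _ in log if not ai_ok) or 1  # avoid div by zero
    n_right = sum(1 for ai_ok, _ in log if ai_ok) or 1
    return {
        "over_reliance": followed_wrong / n_wrong,
        "under_reliance": rejected_right / n_right,
    }

log = [(True, True), (True, False), (False, True), (False, False), (True, True)]
rates = reliance_rates(log)
print(rates)  # over_reliance: 0.5, under_reliance: 1/3
```

Tracking both rates separately matters, because interventions that reduce blind trust (e.g., displaying uncertainty) can simultaneously increase mistakenly denied trust.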
Information bubbles refer to a state of intellectual isolation in which only certain information is processed. The term refers primarily to the consumption of online news and press articles, but also to the perception of opinions and posted content by friends and acquaintances in social online networks (e.g., Facebook).
Similarly, information bubbles may also result from personalized search, for example when algorithms selectively attempt to predict which information users are likely to prefer or consume on the basis of their past behavior (e.g., click behavior, search history) or other data such as age and location. Other possible causes of information bubbles are so-called collaborative filtering algorithms, such as those employed by Amazon. These attempt to forecast which products a user might be interested in based on the user’s own purchases and searches as well as those of other users (“Users who bought this product also bought XYZ”).
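The “also bought” mechanism can be sketched as simple purchase co-occurrence counting. This is a deliberately minimal toy, not Amazon’s actual algorithm, and the product data below is invented for illustration:

```python
# Toy item-based collaborative filtering via co-purchase counts.
# Illustrative only: real recommenders use far richer signals and models.
from collections import Counter
from itertools import combinations

baskets = [  # made-up purchase baskets
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"tripod", "backpack"},
    {"camera", "tripod"},
]

# Count how often each ordered pair of products shares a basket.
co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def also_bought(product, k=2):
    """Top-k products most often bought together with `product`."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == product})
    return [item for item, _ in scores.most_common(k)]

print(sorted(also_bought("camera")))  # ['sd_card', 'tripod'] (tie at 2 co-purchases)
print(also_bought("backpack"))        # ['tripod']
```

The bubble-forming tendency is visible even here: recommendations are drawn only from what similar baskets already contain, so frequently co-purchased items keep reinforcing each other.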
In addition to such technically induced information bubbles, humans naturally tend to consume content that they are familiar with (familiarity bias), whose authors are similar to them (similarity bias), or whose statements they broadly agree with (confirmation bias). This can lead to a consolidation of existing opinions and a decrease in the diversity of consumed content. Ultimately, users are separated from information and other users who do not share their views, interests, and/or characteristics, and are effectively isolated in their respective cultural, ideological, and demographic bubbles.
A look at today’s digital landscape reveals that the presence of computer agents in everyday life has increased continuously over the last two decades. Popular examples include digital assistants (e.g., Google Duplex) and conversational agents for customer and retail services (e.g., Alexa, Siri). These computer agents interact with humans in various contexts; better understanding the specific interaction between humans and computer agents is hence of utmost importance. Specifically, as humans behave differently when dealing with computer counterparts rather than humans, not only does the counterpart’s actual nature matter, but also the human perception of it.
In the 1960s, Joseph Weizenbaum developed what was presumably the first text-based conversational agent (CA), named ELIZA, and since then, research has investigated the design and outcomes of humans interacting with CAs in different contexts. Because CAs enable users to interact with computers through natural language (i.e., a central human quality) and may also display additional characteristics commonly associated with humans (e.g., a human-like appearance or name), users often respond to them socially. To explain this observation, most studies build on the Computers Are Social Actors (CASA) paradigm, which states that humans interacting with computers exhibit social behaviors similar to those observed in human-human interaction, provided the computer exhibits some degree of socialness induced by the presence of social cues. Importantly, the presence of social cues affects human perceptions of a CA’s social presence and determines the success of a long-term relationship between a CA and a human.
The literature on human-computer interaction (HCI) has established that the perceived nature of one’s transaction partner (i.e., human or computer) can significantly affect one’s decisions and behavior: it matters whether one knows one is dealing with another human or with a computer agent, or does not know at all. Especially this latter case has sparked vivid technical, social, and ethical discussions since Google presented its Duplex assistant, which is capable of making natural-language phone calls (e.g., for making table reservations in restaurants) without being recognized as an artificial entity by the humans at the other end of the line. One’s belief of dealing with either another sentient human being or a computer, but also one’s uncertainty about this very fact, is hence likely to affect how one behaves toward and treats an interaction partner. Economic behavior and outcomes can thus also be expected to depend on this fact. This perspective, however, has so far remained largely uncharted in the literature.