Hi David
With the new patch, you introduced Media AI.
So far, one of those AIs is making the highest profit.
In a similar way, perhaps a semi-product-focused AI would do well?
They could focus on certain areas such as livestock/farming, mining and resources, electronics, etc.
I do see that these AIs may not be quite as successful, especially at the beginning, but they would probably become quite successful by mid-game.
Suggestion: Semiproduct focused AI
- Level 6 user
- Posts: 460
- Joined: Thu Jan 13, 2011 5:05 am
- David
- Community and Marketing Manager at Enlight
- Posts: 10431
- Joined: Sat Jul 03, 2010 1:42 pm
- Has thanked: 78 times
- Been thanked: 226 times
Re: Suggestion: Semiproduct focused AI
We have thought about this, but a semi-product-focused AI is very likely to suffer great losses in the first 10-20 years or so, and it is not guaranteed that it will make money afterwards.
So instead we have made it possible for the corporations to buy items from all seaports in all cities, to increase the variety of semi products available on the market.
Re: Suggestion: Semiproduct focused AI
I am curious: after all these years of development and patches, is the AI still incapable of dealing in semi-products and surviving? It's not a very lucrative business, I'm sure, but is it still viable, especially combined with upstream raw materials? Does the difficulty of designing such an AI come from its inability to interpret the intentions of downstream and upstream opponents? Or is it a horizon problem, where the AI can't see the bigger picture in a larger scope: holding out at a loss now, but finding potential customers later and making a profit overall? These questions are all linked to fundamental abilities humans grasp easily: the levels of trust and fairness in cooperative games, and the ability to assess risk and make risky decisions when there isn't enough information, filling in the blanks with assumptions.
In AI design, I suppose something like a trust-and-fairness function could be added: a matrix of "favor" and "trust" values for each pair of corporations (it doesn't need to be a full 30x30 table; a sparse array covering only active trading relationships would do). These values could feed into higher-level cooperation decisions, where both parties seek a balance of mutual profit weighted by the trust involved.

This could even become the foundation of a contract system: beyond a certain level of trust, or on a reference from an already trusted partner (like a real-life vouching system), a trade relationship and its agreed price would persist as long as the trust lasts, and any breach would damage it. Favor could also play a role on top of trust, as an additional bonus or penalty (like a brand added to a product) that can shift quickly to sweeten a deal or make it go south. This way we would have more "varied" relationships between parties: in the long run they build trust (or distrust) over time, but on a daily basis there are no real enemies or real friends when the deal is sweet enough, or just leaves a bad taste (and of course a long-lasting favor can have an effect on trust).

Many strategy games have similar systems; I wonder if this could be added to the Capitalism AIs. Or is something similar already in place but hidden from human players? I sometimes feel the AIs have a grace period and a certain price tolerance before they switch suppliers, and a lot of times I've seen AIs purchase something and then quickly disconnect the links between each other, which always makes me curious.
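To make the idea concrete, here is a minimal sketch of the sparse trust/favor store described above. Everything in it is an assumption for illustration: the class name, the update increments, and the scoring weights are all hypothetical, not anything from the actual game engine.

```python
class RelationshipBook:
    """Sparse (trust, favor) store, keyed only by active trading pairs.

    Illustrative sketch: the +0.05 / -0.30 trust updates and the
    deal_score weights are arbitrary placeholder numbers.
    """

    def __init__(self):
        # Only pairs that actually trade occupy memory -- no full
        # 30x30 corporation table is ever allocated.
        self.pairs = {}

    @staticmethod
    def _key(a, b):
        # Order-independent key so (3, 7) and (7, 3) hit the same entry.
        return (a, b) if a < b else (b, a)

    def get(self, a, b):
        # Unknown pairs start neutral: no trust built, no favor owed.
        return self.pairs.get(self._key(a, b), {"trust": 0.0, "favor": 0.0})

    def record_deal(self, a, b, fair):
        rel = self.pairs.setdefault(self._key(a, b),
                                    {"trust": 0.0, "favor": 0.0})
        if fair:
            rel["trust"] = min(1.0, rel["trust"] + 0.05)   # trust builds slowly
        else:
            rel["trust"] = max(-1.0, rel["trust"] - 0.30)  # a breach damages it fast
        return rel

    def grant_favor(self, a, b, amount):
        # Favor is the fast-moving bonus/penalty layered on top of trust,
        # e.g. a discount granted to sweeten a deal.
        rel = self.pairs.setdefault(self._key(a, b),
                                    {"trust": 0.0, "favor": 0.0})
        rel["favor"] += amount
        return rel

    def deal_score(self, a, b, base_margin):
        # A supplier-switch decision could weigh raw margin against the
        # relationship, giving trusted partners a "grace period" edge.
        rel = self.get(a, b)
        return base_margin + 0.5 * rel["trust"] + 0.2 * rel["favor"]
```

With this shape, the "grace period" behavior described above falls out naturally: a long-trusted supplier keeps a positive `deal_score` even when a rival briefly undercuts on `base_margin`.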
The other question is more of a "personality" issue, where you need to quantify risk. This is the basis of most modern financial engineering, where everyone wants to calculate risk and, ironically, ends up increasing it overall. There are Nash equilibria where gains are quantified against committed stakes, producing every combination from cooperation to retaliation to mutual destruction; however, the willingness to take risk plays a vital role before any equilibrium is reached. When facing a situation with no precedent, people rely on assumptions to make their initial assessment.

Applied to game theory, there is an entire field of study, Evolutionary Game Theory, that uses adaptive methods to find out how the complex behavior observed in real life can arise from simple rules. Its many models and simulations could be used as the basis for a risk factor integrated into the Capitalism AIs. In practice, each CEO personality could start with an orientation somewhere along risk-seeking, risk-neutral, and risk-averse, with different views on different factors. Some would make general business decisions toward long-term targets, and some would aim for short-term gains (the basis of most real-life financial decisions); different combinations could even form a secondary financial market, with some AIs betting positively and some negatively on the primary market, instead of a uniform default behavior. And over time, if some AIs somehow survive their incorrect decisions, they could adapt toward different orientations, or we could even treat them as different generations of the same family taking control of the corporation, having learned from their ancestors' mistakes. (It's funny that the current game can run for hundreds of years with the same person in charge, but it's a comforting thought that future medical tech can prolong our CEOs' life spans indefinitely.)
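The risk-orientation idea can also be sketched in a few lines. This is a standard certainty-equivalent calculation under exponential (CARA) utility, used here purely as an illustration of how one personality parameter makes different CEOs value the same uncertain venture differently; the venture numbers and coefficient values are made up.

```python
import math

def certainty_equivalent(outcomes, risk_aversion):
    """Guaranteed profit a CEO would accept in place of a gamble.

    outcomes: list of (profit, probability) pairs.
    risk_aversion > 0 means risk-averse, < 0 risk-seeking,
    ~0 risk-neutral (plain expected value).
    """
    if abs(risk_aversion) < 1e-9:
        return sum(p * x for x, p in outcomes)
    # Expected utility under u(x) = -exp(-a*x), then invert u.
    eu = sum(p * -math.exp(-risk_aversion * x) for x, p in outcomes)
    return -math.log(-eu) / risk_aversion

# A risky semi-product venture (profits in $M, hypothetical numbers):
# 40% chance of +10, 60% chance of -2. Expected value is +2.8.
venture = [(10.0, 0.4), (-2.0, 0.6)]

for name, a in [("risk-averse", 0.5), ("neutral", 0.0), ("risk-seeking", -0.3)]:
    print(name, round(certainty_equivalent(venture, a), 2))
```

With these numbers the risk-averse CEO values the venture below zero and declines it, the neutral one values it at its expected 2.8, and the risk-seeker values it well above that; three "personalities" reach three different decisions from identical data.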
Overall, if more thought goes into designing even better AIs capable of love and hate (trust/distrust), irrational behavior (bounded rationality, as we like to call it, with a sense of fairness and assumptions), and learning from mistakes (adaptive profiles), it will certainly make the game more colorful and interesting. Not necessarily more difficult, but more diverse, with different strategies becoming viable. Don't rely only on cold hard numbers; work toward the "heart" of the AIs, like some strategy games where manipulating the AI's conception of "reality" is the key. Real-life business executives mostly aren't iron-clad hard-asses, but people persons as well.
-------------------------------------------------------------------
Twitch channel : twitch.tv/ancientbuilder
Youtube channel : www.youtube.com/user/countingtls
-------------------------------------------------------------------