Table of contents
1 Introduction
1.1 Limitations of current MMOGs
1.2 Contributions
1.3 List of publications
1.4 Organization
2 Decentralized services for MMOGs
2.1 Context
2.2 Matchmaking approaches for MMOGs
2.2.1 Elo ranking
2.2.2 League of Legends case study
2.3 Cheating in MMOGs
2.3.1 Gold farming – use of normal behavior for illegitimate reasons
2.3.2 Trust exploit
2.3.3 Bug/hacks exploit
2.3.4 Cheating by destruction / deterioration of the network
2.4 Cheat detection
2.4.1 Solutions based on hardware
2.4.2 Solutions based on a control authority
2.5 Reputation systems and their mechanics
2.5.1 Effectiveness of a reputation model
2.5.2 Reputation system designs
2.5.3 Countermeasures to reputation systems attacks
2.6 The current state of MMOGs services
3 Towards the improvement of matchmaking for MMOGs
3.1 Gathering data about League of Legends
3.1.1 The nature of the retrieved data
3.1.2 Services used to retrieve data
3.2 Analysis of a matchmaking system
3.2.1 Influence of ranking on waiting times
3.2.2 Impact of the matching distance on the game experience
3.2.3 Impact of latency on the game experience
3.2.4 Crosscheck with data from another game
3.3 Tools for a better player matching
3.3.1 Measuring the quality of a matching
3.3.2 Various algorithms for matchmaking
3.4 Measuring up to an optimal match
3.5 Performance evaluation
3.5.1 Matching capacity
3.5.2 Average waiting time
3.5.3 Matching precision
3.5.4 Adjusting the size of the cutlists
3.5.5 P2P scalability and performance
3.6 Conclusion
4 Scalable cheat prevention for MMOGs
4.1 Scalability issues for cheat prevention
4.2 Design of a decentralized refereeing system
4.2.1 System model
4.2.2 Failure model
4.2.3 Architecture model
4.3 Distributed refereeing protocol
4.3.1 Node supervision
4.3.2 Referee selection
4.3.3 Cheat detection
4.3.4 Multiplying referees to improve cheat detection
4.4 Reputation management
4.4.1 Assessment of the reputation
4.4.2 Parameters associated with my reputation system
4.4.3 Jump start using tests
4.4.4 Reducing the overhead induced by the cheat detection
4.5 Performance evaluation
4.5.1 Simulation setup and parameters
4.5.2 Latency
4.5.3 Bandwidth consumption
4.5.4 CPU load
4.5.5 Cheat detection ratio
4.6 Performance in distributed environments
4.6.1 Evaluation in a dedicated environment
4.6.2 Deployment on Grid’5000 and on PlanetLab
4.7 Conclusion
5 Using reputation systems as generic failure detectors
5.1 Detecting failures with a reputation system
5.1.1 Assessment of the reputation
5.1.2 Parameters associated with my reputation system
5.1.3 The reputation based failure detector
5.1.4 Scalability
5.2 Comparison with other failure detectors
5.2.1 Bertier’s failure detector
5.2.2 SWIM's failure detector
5.2.3 Communication complexity
5.3 Performance evaluation
5.3.1 Experimental settings
5.3.2 Measuring the accuracy of multiple detectors
5.3.3 False positives in the absence of failures
5.3.4 Permanent crash failures
5.3.5 Crash/recovery failures
5.3.6 Overnet experiment: measuring up to a realistic trace
5.3.7 Query accuracy probability
5.3.8 Bandwidth usage
5.3.9 PlanetLab experiment: introducing realistic jitter
5.4 Related work
5.5 Conclusion
6 Conclusion
6.1 Contributions
6.2 Future work



