These forums have been archived and are now read-only.

The new forums are live and can be found at https://forums.eveonline.com/

EVE Technology Lab

 
  • Topic is locked indefinitely.
 

Eve-ID.net - Home of EDK

Author
Ammut Irvam
Ministry of War
Amarr Empire
#1 - 2012-09-13 03:49:01 UTC
EVE-ID was created by exi back in 2006, when rig0r released the eve-killboard.net source code to the EVE community. Today EVE-ID is used for feedback & suggestions, as well as updates/patches & general discussion of EDK, which is the core software behind most personal, corporation & alliance killboards today.

~Staff~
IRC & Forum Administrator: Dimmizer (Ammut Irvam)
Lead Developer: Minimoo
Developer/SQL Dumps: AndyM

~~~~~~~~~~~~
The problem with everything in its current state is that we don't have a lot of people to develop, or help develop, EDK. The other problem at this moment in time is that EDK is just far out of date in terms of its core code. EDK was originally made back in 2004-2005 as EVE-Killboard, and we're now in 2012, almost 2013. It is time for us to completely recode and restructure EDK so that both the code and the database structure are better for performance and stability.

EVE-Kill runs its core off EDK & we have all seen what has happened. EVE-Kill has had to take some of the key features of EDK and completely disable or even remove them, and they've had to make modifications just to keep the site functional, because if they hadn't we would all have gotten 404 errors all the time. On top of this they have had to split their database in two, because there were just so many kills it broke SQL.

EDK is great software, but I feel it has lived its time and we need something new & better.

Creating something new will take time & it will need a lot of help, since I myself am not a coder/programmer; I've just been an administrator for the forums and IRC channels, since that is what I do best. I also know how to work with EDK, update it, etc., so I do know some things, but when it comes to actually coding or modifying things in EDK I'm useless.

Don't quote me on this; I am only guessing that these are the problems of EVE-Kill & EDK.
~~~~~~~~~~~~

I'm interested in what others think about the current state of EDK. I know PeterPowers is working on KingBoard, which is built on MongoDB; I'd love to hear more about that, and about the pros and cons of MongoDB versus other databases. How could you, the developers, make a new killboard software that would last us a lifetime?

I will mention this thread to a few others and get them to maybe put their input in on this and hopefully kick something off.
Horak Thor
Angry Mustellid
Lost Obsession
#2 - 2012-09-13 11:38:53 UTC
How would you go about transferring kills from the old EDK database to whatever the new format was?
Or am I completely off the mark?

.....

Ammut Irvam
Ministry of War
Amarr Empire
#3 - 2012-09-13 13:42:07 UTC
Well, it depends on what database was used. I don't think SQL can be structured much more efficiently for killmails, seeing as EVE-Kill has already done pretty much everything possible, so hopefully we'd convert the data from SQL to the new database, if that's possible. Otherwise you'd have to use something like an ID feed or some other system to pull killmails from the old boards, or manually post them all over again lol.
Peter Powers
Terrorists of Dimensions
#4 - 2012-09-13 13:46:16 UTC
That depends a lot on what database you choose to go with.
As for Kingboard, it has a feed import for that,
which, while not the fastest possible option, allows the import to be done.
But even that has a few problems, since EVE-Kill has quite a few old killmails,
some of which are broken beyond repair.

3rdPartyEve.net - your catalogue for 3rd party applications

Khorkrak
The Scope
Gallente Federation
#5 - 2012-09-13 20:34:26 UTC  |  Edited by: Khorkrak
There is no fundamental problem with storing this data in a relational database. The problem is with the database model, database management system and the application code. Using a NoSQL solution is questionable at best because the data is very well structured and simple to model. Map Reduce is simply not going to beat the performance of a decent relational dbms with a well designed model - that is unless you have a number of machines to spread the work out on and even then there are parallel database solutions to spread query processing across machines as well. When your data is highly variable in structure (web pages, random JSON structures etc) then that's a different story.

So design the database as you would a data warehouse. Use a more industrial-strength database than MySQL, such as PostgreSQL or, if you can afford it, Oracle. At the very least, stay away from MyISAM if you must use MySQL, as it is prone to corruption and does not scale well at all when inserts/updates are involved, due to table locks. Use a CDN to cache pages to reduce hits to your site. Write headers properly so pages are cached by browsers. Use Varnish, and cache objects as well in a system like memcached; I use uWSGI caching, for instance.
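The browser-caching point above can be sketched in a few lines. A minimal WSGI example, with illustrative names and an assumed one-day max-age (this is not pykb's actual code):

```python
# Minimal WSGI sketch: set Cache-Control so browsers and a fronting
# cache (e.g. Varnish or a CDN) can serve repeated killmail views
# without hitting the application. Body and max-age are illustrative.

def app(environ, start_response):
    body = b"<html><body>killmail 12345</body></html>"
    headers = [
        ("Content-Type", "text/html"),
        # Old killmails never change, so cache aggressively for a day.
        ("Cache-Control", "public, max-age=86400"),
        ("Content-Length", str(len(body))),
    ]
    start_response("200 OK", headers)
    return [body]
```

Any WSGI server (uWSGI included) can host a callable like this; the header is what lets the layers in front of the application do the work.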

I've already written a system that does just about everything that the EDK killboard does aside from the following:
- Campaign based kill tracking
- Showing battle / involved ships data (coming soon)
- Enabling user themes
- Allowing manual killmail entry (the parsing code exists; there's just no web page for it yet).

I did this alone in my spare time, an hour or two a night after taking care of my baby daughter and working 9-10 hours a day at a full-time software development job, over the course of about 5 months.

On top of this it does more:
- In depth stats with charts.
- Solo kill rankings
- Breadcrumbs, Prev / Next buttons to scroll through kills
- Filtering on searches (only search for certain ship types).
- SQL injection and XSS are impossible.
- Immune to the plethora of PHP hacks that constantly hit web sites.

The code is modern and modular. It's split into separate projects as well. There's a library with scripts for the backend stuff. There's DDL for both MySQL and PostgreSQL. There are no hard coded dependencies on the database engine used. There's a front-end project that just implements the web side of the application. So a completely different front end could be written relatively easily, reusing all the backend code to deal with the heavy lifting.

On top of this it runs very fast and can display stats about pilots, corps and alliances that are impossible to do efficiently with the EDK system as it's written.

Project
http://sourceforge.net/projects/pykb/

Sample Site
http://www.decloaked.com

When you feel the urge to use a NoSQL solution, read this first to be sure you're not about to do something stupid (as you most likely are). NoSQL is not new; it's the way things were done in the '60s (back to the future).
http://www.dbdebunk.blogspot.com/

Example parallel SQL database implementation enabling queries to be split up across machines.
For PostgreSQL
http://sourceforge.net/projects/gridsql/
http://www.pgpool.net/mediawiki/index.php/Main_Page

For MySQL
http://spiderformysql.com/product.html

I'm using just a single system for the site and the database as it's still small for now. I'll likely split them into two as it grows. Also please stop writing code in PHP for great justice. Pick any other language (ok not Perl either) and it'll be better for the world.

Developer of http://www.decloaked.com and http://sourceforge.net/projects/pykb/

Ammut Irvam
Ministry of War
Amarr Empire
#6 - 2012-09-13 22:17:57 UTC  |  Edited by: Ammut Irvam
Khorkrak wrote:
There is no fundamental problem with storing this data in a relational database. The problem is with the database model, database management system and the application code. Using a NoSQL solution is questionable at best because the data is very well structured and simple to model. Map Reduce is simply not going to beat the performance of a decent relational dbms with a well designed model - that is unless you have a number of machines to spread the work out on and even then there are parallel database solutions to spread query processing across machines as well. When your data is highly variable in structure (web pages, random JSON structures etc) then that's a different story.

So design the database as you would a data warehouse. Use a more industrial-strength database than MySQL, such as PostgreSQL or, if you can afford it, Oracle. At the very least, stay away from MyISAM if you must use MySQL, as it is prone to corruption and does not scale well at all when inserts/updates are involved, due to table locks. Use a CDN to cache pages to reduce hits to your site. Write headers properly so pages are cached by browsers. Use Varnish, and cache objects as well in a system like memcached; I use uWSGI caching, for instance.

I've already written a system that does just about everything that the EDK killboard does aside from the following:
- Campaign based kill tracking
- Showing battle / involved ships data (coming soon)
- Enabling user themes
- Allowing manual killmail entry (the parsing code exists; there's just no web page for it yet).

I did this alone in my spare time, an hour or two a night after taking care of my baby daughter and working 9-10 hours a day at a full-time software development job, over the course of about 5 months.

On top of this it does more:
- In depth stats with charts.
- Solo kill rankings
- Breadcrumbs, Prev / Next buttons to scroll through kills
- Filtering on searches (only search for certain ship types).
- SQL injection and XSS are impossible.
- Immune to the plethora of PHP hacks that constantly hit web sites.

The code is modern and modular. It's split into separate projects as well. There's a library with scripts for the backend stuff. There's DDL for both MySQL and PostgreSQL. There are no hard coded dependencies on the database engine used. There's a front-end project that just implements the web side of the application. So a completely different front end could be written relatively easily, reusing all the backend code to deal with the heavy lifting.

On top of this it runs very fast and can display stats about pilots, corps and alliances that are impossible to do efficiently with the EDK system as it's written.

Project
http://sourceforge.net/projects/pykb/

Sample Site
http://www.decloaked.com

When you feel the urge to use a NoSQL solution, read this first to be sure you're not about to do something stupid (as you most likely are). NoSQL is not new; it's the way things were done in the '60s (back to the future).
http://www.dbdebunk.blogspot.com/

Example parallel SQL database implementation enabling queries to be split up across machines.
For PostgreSQL
http://sourceforge.net/projects/gridsql/
http://www.pgpool.net/mediawiki/index.php/Main_Page

For MySQL
http://spiderformysql.com/product.html

I'm using just a single system for the site and the database as it's still small for now. I'll likely split them into two as it grows. Also please stop writing code in PHP for great justice. Pick any other language (ok not Perl either) and it'll be better for the world.


Hey m8, at this moment there are 14,361,284 killmails in the EVE-Kill database. How do you think your killboard would do with that many killmails & queries, with people constantly looking at old killmails etc.?

That's the main thing I'm looking for: a good killboard, but one that won't lock up due to a lot of queries and of course the load of killmails.

I LOVE your work btw :D it's very nice and smooth. Could some things maybe be added from EDK? I'd like to keep the original, or at least the basic, features of EDK, but make it so it works and doesn't crash/die under load etc.
Steve Ronuken
Fuzzwork Enterprises
Vote Steve Ronuken for CSM
#7 - 2012-09-13 22:32:51 UTC
You really want to institute caching.

Just check to see if it's cached when someone wants to look at a killmail, and if it's not, get the data from the database, then cache it. That way, people sharing kill links adds a minimal load.

memcache is nigh perfect for this. The first hit on a killmail will be relatively expensive. Every one after that will be cheap until it ages out.

As for how well a SQL-based killboard would do: umm, 14 million entries isn't really that much. You just need appropriate partitioning and indexing.

Before deciding how to store the data, build your user stories. How are people going to want to retrieve the data? Every kill on a person? Every kill someone was part of? And so on.

Don't just cram everything into a single table. Something like:

Kill table:
Kill id, victim id, kill date, ship id, location, kill value.

Drop table:
pk, Kill id (fk), drop list as a json blob (unless you want to search on what dropped.)

Killer table.
pk, Kill id (fk), killer id, ship id, weapon id, damage, final blow boolean.

Stay away from MyISAM and you're probably fine. (MyISAM does badly on locking, locking the entire table when it needs to do something other than select).
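The table sketch above, rendered as concrete DDL. This is a hypothetical rendering in SQLite syntax for the demo; the column names are guesses from the sketch, and the drop table is named `drop_list` since DROP is a reserved word:

```python
import sqlite3, json

# Steve's three-table sketch as runnable DDL. Drops go in as a JSON
# blob exactly as suggested, since they are rarely searched on.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE kill (
    kill_id    INTEGER PRIMARY KEY,
    victim_id  INTEGER NOT NULL,
    kill_date  TEXT    NOT NULL,
    ship_id    INTEGER NOT NULL,
    location   INTEGER NOT NULL,
    kill_value REAL
);
CREATE TABLE drop_list (
    pk      INTEGER PRIMARY KEY,
    kill_id INTEGER NOT NULL REFERENCES kill(kill_id),
    items   TEXT  -- JSON blob of dropped items
);
CREATE TABLE killer (
    pk         INTEGER PRIMARY KEY,
    kill_id    INTEGER NOT NULL REFERENCES kill(kill_id),
    killer_id  INTEGER NOT NULL,
    ship_id    INTEGER,
    weapon_id  INTEGER,
    damage     INTEGER,
    final_blow INTEGER  -- boolean
);
""")

# One sample kill with a dropped-items blob.
conn.execute("INSERT INTO kill VALUES (1, 42, '2012-09-13', 587, 30000142, 1500000.0)")
conn.execute("INSERT INTO drop_list (kill_id, items) VALUES (1, ?)",
             (json.dumps([{"type_id": 34, "qty": 100}]),))
row = conn.execute("SELECT victim_id FROM kill WHERE kill_id = 1").fetchone()
```

On MySQL the same shape would use `ENGINE=InnoDB` and a `BOOLEAN`/`TINYINT` for `final_blow`.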

Woo! CSM XI!

Fuzzwork Enterprises

Twitter: @fuzzysteve on Twitter

Karbowiak
The League Of Unwanted Capsuleers
#8 - 2012-09-13 22:43:51 UTC
Steve Ronuken wrote:
You really want to institute caching.

Just check to see if it's cached when someone wants to look at a killmail, and if it's not, get the data from the database, then cache it. That way, people sharing kill links adds a minimal load.

memcache is nigh perfect for this. The first hit on a killmail will be relatively expensive. Every one after that will be cheap until it ages out.

As for how well a SQL-based killboard would do: umm, 14 million entries isn't really that much. You just need appropriate partitioning and indexing.

Before deciding how to store the data, build your user stories. How are people going to want to retrieve the data? Every kill on a person? Every kill someone was part of? And so on.

Don't just cram everything into a single table. Something like:

Kill table:
Kill id, victim id, kill date, ship id, location, kill value.

Drop table:
pk, Kill id (fk), drop list as a json blob (unless you want to search on what dropped.)

Killer table.
pk, Kill id (fk), killer id, ship id, weapon id, damage, final blow boolean.

Stay away from MyISAM and you're probably fine. (MyISAM does badly on locking, locking the entire table when it needs to do something other than select).


You do realize this is what EDK does?
inv_all / inv_crp holds kills where alliance and corporations are involved
inv_detail holds both inv_all and inv_crp stuff, plus some more data.
kills holds the victim data and is the main lookup table
items_dropped/items_destroyed holds the items

The problem is that when you get a lot of entries, those tables grow large, and EDK constantly joins them. Those tables can easily grow to over 10GB in size, each.

Obviously some of the performance hit can be negated by partitioning, but even then, one false move in a query and it does a full table scan.

That said, EDK does all of this very badly by default, and I'm very much aware that a 10GB table isn't that large compared to what enterprise systems must be handling. But they also have beefier hardware at hand than we do :P

As for the tables, we're using InnoDB (XtraDB, actually) on Percona's version of MySQL, but I'm contemplating switching to MariaDB, seeing as it has some more optimizations for joins.
Anyway, Kingboard is fast even with millions of killmails, and that's on a relatively underpowered virtual machine. Once Kingboard is finally done and EVE-KILL can begin using it, we'll throw some beefy hardware at it, no worries! :)

Forgot to mention, we also use Varnish and memcache on EVE-Kill.
Steve Ronuken
Fuzzwork Enterprises
Vote Steve Ronuken for CSM
#9 - 2012-09-13 22:53:04 UTC
Karbowiak wrote:
Steve Ronuken wrote:
You really want to institute caching.

Just check to see if it's cached when someone wants to look at a killmail, and if it's not, get the data from the database, then cache it. That way, people sharing kill links adds a minimal load.

memcache is nigh perfect for this. The first hit on a killmail will be relatively expensive. Every one after that will be cheap until it ages out.

As for how well a SQL-based killboard would do: umm, 14 million entries isn't really that much. You just need appropriate partitioning and indexing.

Before deciding how to store the data, build your user stories. How are people going to want to retrieve the data? Every kill on a person? Every kill someone was part of? And so on.

Don't just cram everything into a single table. Something like:

Kill table:
Kill id, victim id, kill date, ship id, location, kill value.

Drop table:
pk, Kill id (fk), drop list as a json blob (unless you want to search on what dropped.)

Killer table.
pk, Kill id (fk), killer id, ship id, weapon id, damage, final blow boolean.

Stay away from MyISAM and you're probably fine. (MyISAM does badly on locking, locking the entire table when it needs to do something other than select).


You do realize this is what EDK does?
inv_all / inv_crp holds kills where alliance and corporations are involved
inv_detail holds both inv_all and inv_crp stuff, plus some more data.
kills holds the victim data and is the main lookup table
items_dropped/items_destroyed holds the items

The problem is that when you get a lot of entries, those tables grow large, and EDK constantly joins them. Those tables can easily grow to over 10GB in size, each.

Obviously some of the performance hit can be negated by partitioning, but even then, one false move in a query and it does a full table scan.

That said, EDK does all of this very badly by default, and I'm very much aware that a 10GB table isn't that large compared to what enterprise systems must be handling. But they also have beefier hardware at hand than we do :P

As for the tables, we're using InnoDB (XtraDB, actually) on Percona's version of MySQL, but I'm contemplating switching to MariaDB, seeing as it has some more optimizations for joins.
Anyway, Kingboard is fast even with millions of killmails, and that's on a relatively underpowered virtual machine. Once Kingboard is finally done and EVE-KILL can begin using it, we'll throw some beefy hardware at it, no worries! :)

Forgot to mention, we also use Varnish and memcache on EVE-Kill.


I've not looked at EDK :)

I'm not too surprised, though. I have found that a lot of the time, joining tables during the initial selection phase can be a painful thing to do. Sometimes it works better to just pass in a list of IDs. That only helps when the optimizer is being braindead, of course.

Woo! CSM XI!

Fuzzwork Enterprises

Twitter: @fuzzysteve on Twitter

Dragonaire
Here there be Dragons
#10 - 2012-09-14 04:40:37 UTC
Just a couple things I remember from when a friend and I looked at some of the problems with EDK a year or two ago.

Most of the tables aren't indexed correctly, i.e. the columns used in the joins aren't the ones with indexes, or aren't the best columns to join on. The tables also aren't normalized correctly, as I remember, and should be updated to match the EVE API, since that's the main source of the data; that would mean fewer inserts and fewer joins as well. Some of the queries just didn't make much sense, as they pulled data that was never used.

There are several problems with the feed syndication, and it would probably make more sense to move to something new based on JSON (IMHO anyway) that is actually defined somewhere, like was done with the market log at http://dev.eve-central.com/unifieduploader/start , which has proven to be a big boost to data sharing for all the market folks. Also, to cut down on made-up kills, cross-check all kills from multiple sources and rate them based on that.

Those are what came to mind in a few minutes' time, but it should give you some ideas of where to start ;)

I'll also agree with Steve Ronuken: don't use MyISAM, but InnoDB or something else for the tables, so you can use transactions, which should speed up many of the inserts etc. You might also look at MariaDB with another DB engine like TokuDB, which is made to work with very large datasets and would allow additional indexes to speed up some of the queries.

I don't agree that you need to move away from PHP, though; just update it to use a better class structure and use a bytecode cache. That, along with some DB query caching, will help a lot.

Some user stories are a good idea too, so you get a much better idea of how to do everything to make the user experience better as well.

Anyway, there are some ideas for you to kick around ;)

Finds camping stations from the inside much easier. Designer of Yapeal for the Eve API. Check out the Yapeal PHP API Library thread.

Peter Powers
Terrorists of Dimensions
#11 - 2012-09-14 07:42:37 UTC
Khorkrak wrote:
Using a NoSQL solution is questionable at best because the data is very well structured and simple to model. Map Reduce is simply not going to beat the performance of a decent relational dbms with a well designed model - that is unless you have a number of machines to spread the work out on and even then there are parallel database solutions to spread query processing across machines as well.

Incremental map/reduces work fine as well. For creating aggregated statistics, for example, I don't need to run the same map/reduce over the existing data again; I simply run a map over the new data and reduce it with the result of the previous run.
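A minimal sketch of the incremental approach described here, assuming per-ship-type kill counts as the statistic (this is just the idea in plain Python, not Kingboard's actual MongoDB map/reduce code):

```python
from collections import Counter

# Incremental aggregation sketch: map only the NEW killmails, then
# reduce the partial result into the totals from the previous run,
# instead of re-running over all existing data.

def map_kills(killmails):
    # map phase: emit (ship_type, 1) per kill, folded into a Counter
    return Counter(km["ship_type"] for km in killmails)

def reduce_into(previous, partial):
    # reduce phase: merge the partial counts with the previous output
    merged = Counter(previous)
    merged.update(partial)
    return merged

previous = {"Drake": 10, "Rifter": 4}         # result of the last run
new_batch = [{"ship_type": "Drake"}, {"ship_type": "Hurricane"}]
totals = reduce_into(previous, map_kills(new_batch))
# totals now counts Drake 11, Rifter 4, Hurricane 1
```

The same shape works with MongoDB's incremental map/reduce, where the "previous run" lives in an output collection that new reductions are merged into.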




3rdPartyEve.net - your catalogue for 3rd party applications

Khorkrak
The Scope
Gallente Federation
#12 - 2012-09-14 11:32:56 UTC  |  Edited by: Khorkrak
These NoSQL data stores are seductive because they solve the impedance mismatch issue nicely by storing the data similarly or even identically to the way it's represented in the application code. There's no conversion from a hash / dict / array / list structure to a relation needed. They scale nicely because you can just add nodes to them and the load is automatically distributed. They're meant for huge data stores spread across multiple machines. The problem though stems from these systems not having a solid theoretical foundation - they're not based on a model that's been rigorously examined unlike set theory, predicate calculus and relational algebra. Also each one has a completely different querying language.

Why does this matter? Well a lot more hidden complexity is introduced and subtle errors can occur that may go unnoticed for quite a while. In MongoDB, how do you define foreign key relationships to ensure consistency? Simple, you can't and don't. How do you do joins? You can't - at least not declaratively, instead you must iterate through the data.

http://cookbook.mongodb.org/patterns/pivot/

What about transactions? Well there's some support for that but it's not like it is in even the most primitive widely used relational database management systems such as MySQL.

So you sacrifice sanity and simplicity from a data integrity perspective for the ability to do map / reduce and have your data be represented in a more familiar way by having your data store structures that very closely match your application ones in form. The former benefit is acceptable when you just can't define a fixed schema - Google Search using Big Table. It's a smart trade off there. In many other cases it's just ignorance and excitement about not having to represent things in a relational way. The problem can instead be solved more intelligently by understanding how to model a database properly, knowing how to define indexes correctly and having the experience to judge what makes one database management system superior to others. To solve the impedance mismatch problem along with decoupling your application code from a particular database's SQL dialect you use an ORM.
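The ORM point can be illustrated in miniature: the application deals only in plain objects, and a thin repository owns all the SQL, so no dialect leaks into application code. A hand-rolled sketch with illustrative names (a real project would use a full ORM such as SQLAlchemy):

```python
import sqlite3
from dataclasses import dataclass

# Impedance-mismatch sketch: the app sees Kill objects; only the
# repository knows there is a database at all.

@dataclass
class Kill:
    kill_id: int
    victim_id: int
    ship_id: int

class KillRepository:
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS kill "
                     "(kill_id INTEGER PRIMARY KEY, victim_id INTEGER, ship_id INTEGER)")

    def save(self, kill):
        # all SQL lives here; swapping the backend touches only this class
        self.conn.execute("INSERT INTO kill VALUES (?, ?, ?)",
                          (kill.kill_id, kill.victim_id, kill.ship_id))

    def get(self, kill_id):
        row = self.conn.execute(
            "SELECT kill_id, victim_id, ship_id FROM kill WHERE kill_id = ?",
            (kill_id,)).fetchone()
        return Kill(*row) if row else None

repo = KillRepository(sqlite3.connect(":memory:"))
repo.save(Kill(1, 42, 587))
```

A full ORM adds relationship mapping, identity maps, and dialect handling on top of this same separation.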

Can I say that the system I wrote will handle 14+ million kills, 150+ million kill items (dropped/destroyed) and so on? Most likely. We'll see over time. I don't have access to that volume of data to test with, although I'm considering generating test data now to give it a try. I tested it initially with about 200,000 killmails, and I found some issues that I fixed by adjusting indexes and some queries. The production system started from scratch with just my personal and corp killmail history. It's now at about 40,000 killmails, 500,000 dropped/destroyed items, 450,000 attackers, 55,000 characters, 10,000 corporations and 1,500 alliances. No performance issues yet, though it's still quite small, of course.

Developer of http://www.decloaked.com and http://sourceforge.net/projects/pykb/

Shellac Brookdale
Cutting Edge Incorporated
RAZOR Alliance
#13 - 2012-09-14 13:13:55 UTC
MongoDB is a great solution for storing this kind of data, as each kill pulled through the API is already represented as a document. You just have to convert it to JSON and dump it 1:1 into MongoDB. Data consistency and transactions are a non-issue in this case; you can't count on any consistency anyway with the EVE API. There are always cases where IDs won't exist, but you need to store the killmail anyway, to be able to look into it later and solve the problem. Same for the data model: CCP may introduce changes to the killmail format on very short notice. With a schemaless database you can just dump the whole killmail and take care of added elements later. With a traditional RDBMS you need to keep your database schema updated, making it harder to adapt to changes than it should be.
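A sketch of the dump-it-1:1 approach, assuming a killmail shaped roughly like the API output (field names are illustrative; with pymongo the dict would be inserted into a collection directly instead of serialized by hand):

```python
import json

# Schemaless storage sketch: the killmail arrives as a document, so
# store it 1:1. A field added on short notice survives with no schema
# migration; the application deals with it whenever it is ready.

killmail = {
    "killID": 12345,
    "victim": {"characterID": 42, "shipTypeID": 587},
    "attackers": [{"characterID": 99, "finalBlow": True}],
    # hypothetical field CCP added on short notice: stored anyway
    "warID": 7,
}

stored = json.dumps(killmail)   # dump 1:1, no schema to update
loaded = json.loads(stored)     # round-trips with every field intact
```

The trade-off Khorkrak raises still applies: what the database no longer validates, the application code must.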
Trenker
#14 - 2012-09-14 13:45:41 UTC
Just mentioning things like "joins" and "foreign keys" shows me that you haven't fully gotten into the NoSQL thing yet.

In the beginning, Ammut Irvam mentioned MongoDB, since Peter Powers uses it in the Kingboard he's currently developing.

But there are other options out there as well, like CouchDB.

I for one am currently playing around with the SDD and Solr, and that is blazingly fast!

For a true successor to EDK, however, you cannot simply jump onto the latest hype technology, because it has to run "everywhere".
The base of EDK is PHP and MySQL, which are available on almost every shared host. That's maybe the main reason for EDK's success. Only drop this if there is no other way around it!
And that's why I have to agree with Khorkrak's statement that optimizing the schema is maybe the best approach in this case.

Use some smart caching mechanisms with multiple caching levels (don't only cache the whole page or build everything from scratch; cache, e.g., user data, ship/item stats, etc.), use lazy loading (don't look up and render 100 attackers if the user can only see 5), and use the rendering power of the client instead of building all the HTML on the server (use localStorage, IndexedDB, and AJAX requests with a far-future Expires header).
Combined, you won't drop support for all those killboard operators who don't have fancy hardware, yet you'll have a shiny and fast system for big players like EVE-Kill.
Khorkrak
The Scope
Gallente Federation
#15 - 2012-09-14 14:52:44 UTC
It's 2012. Long gone are the days of having to use the worst possible choices for a programming language and relational database because that's all that was available from ISPs.

Sadly it's also not a case of "I don't get NoSQL", it's simply that most developers do not have a deep understanding of the mathematical foundation of relational database theory. There is no better solution. It's based on set theory and predicate calculus. It's already been proven decades ago to be the most efficient way to store inter-related data. No matter how much time passes 2 + 2 will continue to equal 4. Sorry.

Database management system implementations and the querying language, SQL are a different story. Those certainly can and need to be improved. Going back to the old ways of hierarchical data stores and systems that may run faster right now based on those in some cases is unwise.

Developer of http://www.decloaked.com and http://sourceforge.net/projects/pykb/

Dragonaire
Here there be Dragons
#16 - 2012-09-14 16:36:56 UTC
It's always interesting to me that when someone starts talking about how PHP is the worst language, it's always a Python person saying it ;) Python is one of only a couple of languages I've ever found difficult to use, the other being Perl.

There are places for things like MongoDB, i.e. the problems it was made to solve, but all too many people are like the guy with a new hammer: everything looks like a nail to them ;) CCP has already done half the work for us with an XML API backed by a relational DB, and putting the data into anything but a relational DB on our end just throws away the advantages of that structure.

Finds camping stations from the inside much easier. Designer of Yapeal for the Eve API. Check out the Yapeal PHP API Library thread.

Ammut Irvam
Ministry of War
Amarr Empire
#17 - 2012-09-14 21:42:35 UTC  |  Edited by: Ammut Irvam
OK, so what needs to be done so that EDK won't lock up under mass queries etc.? That's the main problem I'm having, and it'd be nice if anyone else was interested in helping with the development of EDK, or even starting a new killboard software altogether.

Please get in contact with me via the forums at eve-id.net by sending me a PM, or on IRC at irc.coldfront.net in channel #eve-id.

Edit: From what I gather from Karbowiak, not much can be done, really, since it uses a ****** DBMS.
Steve Ronuken
Fuzzwork Enterprises
Vote Steve Ronuken for CSM
#18 - 2012-09-14 22:05:55 UTC
The very first thing:

If the tables are in MyISAM format, you need to convert them to InnoDB. This will complicate backups a little, as you won't be able to use mysqlhotcopy any more.

If the table is purely for lookups, and never gets updated, you can leave it alone.

Might be handy.
http://dev.mysql.com/doc/refman/5.0/en/converting-tables-to-innodb.html


The downside of conversion is that it'll take quite a while with big tables. What might be worth doing is using binary logging to capture the changes that happen to the live tables during the conversion, then bring a converted copy of each table up to date: back them up, restore them to another database (or to different names), convert, then rename, before applying the logged changes. During that time, you'll want to be offline.

(whatever you do, don't do it without having a backup. That's just asking to lose everything.)

MyISAM, when it needs to do an update or insert, locks the entire table exclusively. If there's a long select running, the lock will stop any other selects from starting until the running ones, and the update, complete.
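The conversion step itself can be scripted. A sketch, with illustrative EDK-style table names; on a live server the list would come from `SELECT table_name FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema = 'yourdb'` rather than being hardcoded:

```python
# Generate the ALTER statements for a MyISAM -> InnoDB conversion.
# Table names are illustrative; run the output during a maintenance
# window, and only after taking a backup, as described above.

myisam_tables = ["kb3_kills", "kb3_inv_detail", "kb3_items_dropped"]

def conversion_statements(tables):
    # one ALTER per table; each one rewrites the whole table, so big
    # tables take a while
    return [f"ALTER TABLE {t} ENGINE=InnoDB;" for t in tables]

statements = conversion_statements(myisam_tables)
for stmt in statements:
    print(stmt)
```

Feeding the output to the `mysql` client (or executing it over a driver connection) performs the conversion table by table.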


Other things to help:

Turn on the slow query log. Look at the queries happening and see if there's anything that should be indexed. Explain plans are your friend. Anything that's being used to select data (after the WHERE) should have an index. Be prepared to create indexes on multiple columns, most selective to least selective. (Index creation is a mix of science and art.)
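A small illustration of the explain-plan workflow, using SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN (table and index names are made up):

```python
import sqlite3

# Add a composite index (most selective column first) and confirm the
# planner actually uses it for the query shape the killboard runs.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kills (kill_id INTEGER PRIMARY KEY, "
             "victim_id INTEGER, kill_date TEXT)")
conn.execute("CREATE INDEX idx_victim_date ON kills (victim_id, kill_date)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM kills "
    "WHERE victim_id = 42 AND kill_date > '2012-01-01'").fetchall()
plan_text = " ".join(str(row) for row in plan)
# plan_text names idx_victim_date: the composite index is being used
```

On MySQL the equivalent check is `EXPLAIN SELECT ...` and reading the `key` column of the output.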


If a column cannot be null (by your database design), make sure you have a NOT NULL constraint; it will change how the optimizer works. If you're using null as an actual useful data value, shoot the person who designed the database. (WHERE X IS NULL is a good reason to call a firing squad.)

If you're running columns through functions on the column side of a comparison, that'll hurt performance too. Something like Postgres might be a good migration target, as it supports indexes on that kind of expression.

Woo! CSM XI!

Fuzzwork Enterprises

Twitter: @fuzzysteve on Twitter

Ammut Irvam
Ministry of War
Amarr Empire
#19 - 2012-09-14 22:17:13 UTC
I'll send this thread to the 2 developers and see what they say, thanks Steve.
Johnathan Roark
Quantum Industries
#20 - 2012-09-15 04:48:59 UTC
Ammut Irvam wrote:
OK, so what needs to be done so that EDK won't lock up under mass queries etc.? That's the main problem I'm having, and it'd be nice if anyone else was interested in helping with the development of EDK, or even starting a new killboard software altogether.

Please get in contact with me via the forums at eve-id.net by sending me a PM, or on IRC at irc.coldfront.net in channel #eve-id.

Edit: From what I gather from Karbowiak, not much can be done, really, since it uses a ****** DBMS.


Are you looking for evekill.net size, or just for a corp or alliance? Either way, I don't think you're going to "fix" EDK; you'll end up rewriting everything anyway. Best to just start clean and work in the things from EDK you want. Last time I looked at EDK, the queries were bad because the code was bad. It can be done with a relational database; it just has to be designed correctly.

The biggest thing I always wanted was signed killmails in the feed syndication, with API-style feeds or JSON, so that if a server was sending bad mails you could block those mails.

EVEVERIFY - A recruiting API Verification and Audit Tool

Also try out Yapeal for your php api needs