These forums have been archived and are now read-only.

The new forums are live and can be found at https://forums.eveonline.com/

EVE General Discussion

 
  • Topic is locked indefinitely.
 

Project Discovery: Challenging & Interesting Samples

HPA Illuminator
H P A
C C P Alliance
#21 - 2016-03-10 15:07:56 UTC
Kali Starchaser wrote:
HPA Illuminator wrote:
Kali Starchaser wrote:
Here is a fun one I would LOVE feedback on, because it's confusing the crap out of me whether I should check 'Abnormal sample' or not before I submit it.


http://puu.sh/nBzMy/25bba5ddad.jpg



Nice one! I wouldn't say it's abnormal. It's a cell-to-cell variation pattern of the plasma membrane. Possibly some cytoplasm too (but likely not) - if the green staining is perfectly uniform and flat all over the cell when you toggle off red and blue, I would say it's only PM.



http://puu.sh/nBAGg/5e1f54ab69.jpg blue + green
http://puu.sh/nBANk/f5a4f5bac8.jpg green only
http://puu.sh/nBAKB/c67b6f994b.jpg red + blue

I kinda took a bunch of screenshots of it for my oddity folder. :)


Hehe good thinking. Definitely a plasma membrane staining with no cytoplasm (at least none that can be seen with this optical sectioning of the cell).
Terminal Insanity
KarmaFleet
Goonswarm Federation
#22 - 2016-03-11 00:29:57 UTC  |  Edited by: Terminal Insanity
Yume Mei wrote:

I'm pretty sure you selected the correct one... it's just gonna take time for people to get good at this, and for the system to weed out the bad ones =p



Was I right in selecting the two categories for this one? Previously I was only selecting Nucleus on samples like this because I felt it was the prominent feature, but now that I've gone through some of these and am getting more comfortable, I'm thinking this should be both Nucleus and Cytoplasm?
http://i.imgur.com/iYVeStn.png

I went with Cytoplasm and not Vesicles because the staining in the red part was more blurry, not really defined dots.


I'm also curious what the difference between an "Abnormal sample" and "cell to cell variation"/"not identifiable" would be.

"War declarations are never officially considered griefing and are not a bannable offense, and it has been repeatedly stated by the developers that the possibility for non-consensual PvP is an intended feature." - CCP

Helios Estraella
School of Applied Knowledge
Caldari State
#23 - 2016-03-11 00:48:43 UTC
Terminal Insanity wrote:


I went with Cytoplasm and not Vesicles because the staining in the red part was more blurry, not really defined dots.


I'm also curious what the difference between an "Abnormal sample" and "cell to cell variation"/"not identifiable" would be.


If you look at all the cytoplasm examples, the nuclei are pretty much void of green. In your case the nucleus is filled with green, so I wouldn't categorize this as cytoplasm. But I may well be wrong.
Nevyn Auscent
Broke Sauce
#24 - 2016-03-11 01:57:19 UTC
http://i.imgur.com/tkkrx53.png Try and work that one out for a laugh.
Tzar Sinak
Mythic Heights
#25 - 2016-03-11 02:06:27 UTC
I really hope a sub-forum can be created in the Game Center section. The discovery threads are too good to lose in the clutter.

Hydrostatic Podcast First class listening of all things EVE

Check out the Eve-Prosper show for your market updates!

SurrenderMonkey
The Exchange Collective
Solyaris Chtonium
#26 - 2016-03-11 02:38:41 UTC
Nevyn Auscent wrote:
http://i.imgur.com/tkkrx53.png Try and work that one out for a laugh.


Could be mitochondria or a Golgi apparatus? I think it's something on top of the nucleus (the red filaments running through the nucleus are a giveaway), not actually in it.

"Help, I'm bored with missions!"

http://swiftandbitter.com/eve/wtd/

Annemariela Antonela
Sebiestor Tribe
Minmatar Republic
#27 - 2016-03-11 03:31:36 UTC
I am an Advanced Analyst now.

I'm doing Science.

“Culture is like a smog. To live within it, you must breathe some of it in and, inevitably, be contaminated.”

― Richard K. Morgan, Altered Carbon

Van Dracon
Doomheim
#28 - 2016-03-11 05:45:19 UTC
The new feature has produced many inconsistent results, especially with basic findings where you are 100% accurate but still get a failed result. It has me wondering whether I should keep wasting my time on project failure.

For starters, the pictures should be increased in size instead of us looking through a keyhole.

Can we get 3D images, if possible?

Why did I fail on one of the results? Please give users more explanatory information to educate them on failed results instead of leaving them guessing.


I like to help, like many others, though I believe the new feature needs to be expanded with more features. Until I see improvement, I won't be as committed to it as I once thought.
HPA Illuminator
H P A
C C P Alliance
#29 - 2016-03-11 06:58:53 UTC  |  Edited by: HPA Illuminator
Terminal Insanity wrote:
Yume Mei wrote:

I'm pretty sure you selected the correct one... it's just gonna take time for people to get good at this, and for the system to weed out the bad ones =p



Was I right in selecting the two categories for this one? Previously I was only selecting Nucleus on samples like this because I felt it was the prominent feature, but now that I've gone through some of these and am getting more comfortable, I'm thinking this should be both Nucleus and Cytoplasm?
http://i.imgur.com/iYVeStn.png

I went with Cytoplasm and not Vesicles because the staining in the red part was more blurry, not really defined dots.


I'm also curious what the difference between an "Abnormal sample" and "cell to cell variation"/"not identifiable" would be.


Difficult one. It's really borderline regarding the cytoplasm. Nucleus: absolutely correct. Cytoplasm... maybe (it's definitely not vesicles, you're correct there). It could just be "background" staining since it's so weak. So I can understand why it has a 50% ratio :) Personally I would go for only nucleus, but I have looked at *a lot* of images, so I'd actually say it's better to label everything you think matches.

Abnormal sample: you find a distinct pattern in the cells that doesn't match any of the categories, OR it's a broken image.

Cell-to-cell variation: when you see e.g. a strong nucleus staining in a few cells, but some are really weak. Or some cells stain the nucleus and some stain mitochondria (or whatever). When things differ visibly between cells.

Not identifiable: you can't distinguish any pattern; the green is just all over the place in the cell, and everything is the same shade of green.
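Taken together, the three fall-back categories above behave like a small decision rule. As a rough sketch only (the per-cell fields and the ordering are invented for illustration; the real call is a visual judgment, not anything in Project Discovery's code):

```python
def fallback_label(cells):
    """Toy decision rule for the three fall-back categories described above.

    `cells` is a list of dicts with invented per-cell fields:
      'pattern' - the localization pattern seen in that cell, or None if no
                  pattern can be distinguished
      'broken'  - True if the image itself is damaged
    """
    if any(c["broken"] for c in cells):
        return "abnormal sample"            # broken image
    patterns = {c["pattern"] for c in cells}
    if patterns == {None}:
        return "not identifiable"           # green everywhere, no pattern
    if len(patterns) > 1:
        return "cell-to-cell variation"     # cells visibly differ
    return "normal"                         # one consistent pattern

cells = [{"pattern": "nucleus", "broken": False},
         {"pattern": "mitochondria", "broken": False}]
print(fallback_label(cells))  # prints: cell-to-cell variation
```

Note that "abnormal sample" also covers a distinct pattern that matches none of the categories; that part is a purely visual call and is left out of the sketch.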
HPA Illuminator
H P A
C C P Alliance
#30 - 2016-03-11 07:05:29 UTC
Helios Estraella wrote:
Terminal Insanity wrote:


I went with Cytoplasm and not Vesicles because the staining in the red part was more blurry, not really defined dots.


I'm also curious what the difference between an "Abnormal sample" and "cell to cell variation"/"not identifiable" would be.


If you look at all the cytoplasm examples, the nuclei are pretty much void of green. In your case the nucleus is filled with green, so I wouldn't categorize this as cytoplasm. But I may well be wrong.


The examples only show one location each, to make it easier to see what the pattern should look like. However, most images (>60%) will have at least two (sometimes three) locations, and nucleus/nucleoplasm + cytoplasm is by far the most common combination.

In this one I wouldn't click cytoplasm, though, but that's because it's so weak that I don't believe it's a specific staining (it's borderline, so as a gamer, I think you probably should :))
HPA Illuminator
H P A
C C P Alliance
#31 - 2016-03-11 07:07:25 UTC
Nevyn Auscent wrote:
http://i.imgur.com/tkkrx53.png Try and work that one out for a laugh.


LOL! It's either imaged on top of the cell (= wrong focus), or it's a dying cell. If everything looked like that, it would clearly be an "unidentifiable".
HPA Illuminator
H P A
C C P Alliance
#32 - 2016-03-11 07:12:13 UTC
Van Dracon wrote:
The new feature has produced many inconsistent results, especially with basic findings where you are 100% accurate but still get a failed result. It has me wondering whether I should keep wasting my time on project failure.

For starters, the pictures should be increased in size instead of us looking through a keyhole.

Can we get 3D images, if possible?

Why did I fail on one of the results? Please give users more explanatory information to educate them on failed results instead of leaving them guessing.


I like to help, like many others, though I believe the new feature needs to be expanded with more features. Until I see improvement, I won't be as committed to it as I once thought.


I think (not sure) that there's a forum thread better suited for feedback on the UX/UI. I'll ask ppl to have a look in this forum too, though.

3D images aren't possible atm, as we've so far only acquired 2D images. The reason is that 3D simply takes too much time (atm one image takes around 3 s on the microscope, just the actual acquisition; a 3D stack would take a minute or so... times 150k samples = not doable, unfortunately. It would be awesome to have, though!).
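The back-of-the-envelope behind that "3 s vs. a minute" trade-off, using only the numbers from the post:

```python
# Pure acquisition time for the whole sample set at the two speeds
# mentioned above: 150k samples, ~3 s per 2D image vs ~60 s per 3D stack.
SAMPLES = 150_000

for label, secs_per_image in [("2D, ~3 s/image", 3), ("3D stack, ~60 s/image", 60)]:
    total_hours = SAMPLES * secs_per_image / 3600
    print(f"{label}: {total_hours:,.0f} h (~{total_hours / 24:.0f} days) of pure scanning")
```

That is roughly 125 hours of scanning for 2D versus about 2,500 hours (over three months of round-the-clock microscope time) for 3D stacks, which is why the trade-off falls the way it does.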
HPA Illuminator
H P A
C C P Alliance
#33 - 2016-03-11 07:12:41 UTC
Annemariela Antonela wrote:


Woop woop!
Sp3ktr3
Unicorn Rampage
#34 - 2016-03-11 08:01:44 UTC
Annemariela Antonela wrote:


Well I do science all day, and then come home to fly spaceships and blow stuff up and end up doing science again. It never ends!!
Van Dracon
Doomheim
#35 - 2016-03-11 08:56:52 UTC
HPA Illuminator wrote:


I think (not sure) that there's a forum thread better suited for feedback on the UX/UI. I'll ask ppl to have a look in this forum too, though.

3D images aren't possible atm, as we've so far only acquired 2D images. The reason is that 3D simply takes too much time (atm one image takes around 3 s on the microscope, just the actual acquisition; a 3D stack would take a minute or so... times 150k samples = not doable, unfortunately. It would be awesome to have, though!).


Hey, thanks for the reply. Just to tell you something about 3D: it would be awesome. Let me explain my views on 2D and 3D. Say we have all these green dots supposedly only in the blue area. With 2D you assume they're all inside when you look at the photo. But 3D can uncover more detail: at times 3D shots can tell you whether all those green dots are floating above the blue area or literally sitting in it. This could be one major factor in eliminating errors and being more accurate, right?

Even though it takes 1 minute, wouldn't it be more important to be more accurate? I would rather be more accurate than not, especially with this application. I don't want to sound like a smart-ass, but accuracy is prime. Maybe your software does it already but reports back in 2D, I'm not sure. Or are there other applications that can speed up image processing? I think over the long run you would have more accurate results if 3D is the latest technology we use today, e.g. the new 3D laser printers.

Anyway, those are just my own thoughts.
HPA Illuminator
H P A
C C P Alliance
#36 - 2016-03-11 10:28:11 UTC  |  Edited by: HPA Illuminator
Van Dracon wrote:
But 3D can uncover more detail: at times 3D shots can tell you whether all those green dots are floating above the blue area or literally sitting in it.

Even though it takes 1 minute, wouldn't it be more important to be more accurate?


Thanks for elaborating, it's interesting to hear what ppl think! I agree that accuracy is really important. NB that each image is of one focal plane/narrow slice of the cell, so for something to be seen from "outside" the slice, it either has to be a reaaaally strong staining, or the image has to be acquired at the interface between nucleus and cytoplasm.

What we do now, rather than acquiring 3D images: if something can't be visualized perfectly in one image, we take several images of the same cells, but at different focal planes. That way we can visualize e.g. focal adhesions in one image, and in the next we show e.g. nuclear speckles. But yes, it would be ideal to have high-res 3D images to scroll through.

If you see spots in the blue area, and when toggling the red on/off you see a nice outline of the cell without any red in the blue parts - then you know that the image you are looking at has been acquired "in the middle" of the cell. Thus, the spots in the blue really should be there, rather than floating below/on top. Also, the look of the spots can usually tell you whether they are in focus or not. If they are big (subjective, I know) and somewhat blurry, they aren't in focus, and you should be hesitant. If they're more distinct, they are in the same focal plane as the nuclei, and you can trust them.

Technical comment:

No, we acquire 2D images (Leica SP5 confocal microscopes). We do have the possibility of acquiring 3D images in the form of stacks, i.e. the microscope first images a slice at the bottom of the cell and then works its way up to the top, slice by slice (the number of slices can be set manually). The reason for using this kind of microscopy is that it gives high-resolution images, and we can look at one focal plane at a time (compared to a microscope that images the whole cell at once, but at much worse resolution). High-res images are really necessary to see substructures such as the centrosome or the fibrillar center. Each focal plane (slice) is very narrow, so (as I mentioned above) for something to be seen from below/above, the signal from it either has to be really strong, or the slice has to be in close proximity to e.g. the interface between the nucleus and the cytoplasm.

The reason image acquisition is so time-consuming is that the microscope scans the sample with one laser, then the 2nd, and finally the 3rd (exciting the fluorophores in the sample at different wavelengths; we have to excite separately to avoid bleedthrough between channels, e.g. staining in the blue showing up in the green).

Also, the resolution in z will not be as good as the xy resolution (a general consequence of how light moves through different media, which I would have to refresh my memory on before commenting further), which means that even if we did acquire 3D images, they wouldn't be perfect.

Sorry for nerding out on microscopes :)
Van Dracon
Doomheim
#37 - 2016-03-11 11:27:57 UTC

I really enjoyed reading your reply. I didn't realize how many different cameras there are and how much light can or can't get through. I understand about the slice; I guess that is very accurate in its own way. Speaking for myself and probably many others, I think it's going to take some time to get used to. I don't come from a science background - I worked in I.T. for some years and was just looking at it from an application point of view.

I spent many years in software automation. It would be nice to have recognition filters, so instead of you selecting which examples come close to the examined photo, you could set up filter options instead. E.g. if there are more than 5 green dots in the blue area, have that category selected automatically instead of you picking it on the right-hand side of the pane. I guess since the photos are static we can't set up such filters, because a photo is a photo.

Because you work with colors, is there a way we can calculate how much red, green, and blue light is in the photo? Why am I asking this? Again, for setting up filters based on light ratios as percentages. So if there is 60% green light in the photo, which category is it likely to be, or be closest to, among the categories on the right side of the pane?

Again, why am I talking about colors: there are so many results I'm not sure about, e.g. whether 5 or fewer green dots in the blue area give off the same green light as the others that fall in the same basket. Or is most of this color thing I'm talking about irrelevant?

Just my thoughts.

Once again, thanks for sharing.
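Van Dracon's light-ratio idea is easy to prototype. A minimal sketch (the synthetic pixel data and the way "green light" is measured are assumptions for illustration, not anything Project Discovery actually uses):

```python
def channel_fractions(pixels):
    """Fraction of total intensity carried by each RGB channel.

    `pixels` is any iterable of (r, g, b) intensity triples, e.g. from
    Pillow's Image.getdata(); here we just use a synthetic list.
    """
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
    grand = sum(totals) or 1  # avoid division by zero on an all-black image
    return [t / grand for t in totals]

# Synthetic "image": 16 pixels of green stain, 4 of which also carry blue.
pixels = [(0, 150, 0)] * 12 + [(0, 150, 100)] * 4
r, g, b = channel_fractions(pixels)
print(f"red {r:.0%}, green {g:.0%}, blue {b:.0%}")  # -> red 0%, green 86%, blue 14%
```

A real filter would then compare these fractions against per-category thresholds, which is roughly the "60% green" rule described above; in practice the stainings overlap spatially, so ratios alone can't separate e.g. nucleus from nucleoli.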
HPA Illuminator
H P A
C C P Alliance
#38 - 2016-03-11 13:00:26 UTC
Van Dracon wrote:
Would be nice to have recognition filters... Because you work with colors, is there a way we can calculate how much red, green, and blue light is in the photo?


I'll actually ask my colleague HPA_Dichroic to give some input on this, as he's working with image analysis and can comment on it much better than I can.

Thanks for discussing and coming up with ideas - love the interest from everyone!
HPA Dichroic
H P A
C C P Alliance
#39 - 2016-03-11 16:02:18 UTC
Van Dracon wrote:
I spent many years in software automation. It would be nice to have recognition filters, so instead of you selecting which examples come close to the examined photo, you could set up filter options instead. E.g. if there are more than 5 green dots in the blue area, have that category selected automatically.

Because you work with colors, is there a way we can calculate how much red, green, and blue light is in the photo? Again, for setting up filters based on light ratios as percentages.


Hey Van Dracon, I'm not 100% sure what you mean by recognition filters. Do you mean some sort of semi-automated classification? We could certainly provide a suggested classification based on some basic image features, but where would the fun in that be? :)

I am working with my student on implementing a fully automated system for this task that will use neural networks to recognize these sub-cellular patterns. Ideally this system will answer most of the images and give a confidence score, so that in the future we only have to review the least confident classifications. The system is supervised, though, so it needs lots of quality training data - which is where you come in.

Maybe you can explain more about your idea and I'll try to understand it better :)
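The triage workflow Dichroic describes - auto-accept confident predictions, queue the rest for human review - can be sketched in a few lines (the image IDs, scores, and the 0.80 cutoff are all made up; a real system would take confidences from the network's output):

```python
# Hypothetical (image_id, predicted_label, confidence) triples from a classifier.
predictions = [
    ("img_001", "nucleus",      0.97),
    ("img_002", "cytoplasm",    0.55),
    ("img_003", "vesicles",     0.91),
    ("img_004", "mitochondria", 0.42),
]

CONFIDENCE_CUTOFF = 0.80  # assumed threshold; would be tuned on validation data

auto_accepted = [p for p in predictions if p[2] >= CONFIDENCE_CUTOFF]
needs_review = sorted((p for p in predictions if p[2] < CONFIDENCE_CUTOFF),
                      key=lambda p: p[2])  # least confident first

print("auto:", [p[0] for p in auto_accepted])      # auto: ['img_001', 'img_003']
print("review:", [p[0] for p in needs_review])     # review: ['img_004', 'img_002']
```

Sorting the review queue by ascending confidence is what makes "only review the least confident classifications" work: human effort goes where the model is weakest, and those answers feed back in as training data.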
Helios Anduath
The Scope
Gallente Federation
#40 - 2016-03-11 16:27:19 UTC
A quick question on the cell-to-cell variation option: how does it play into the weighting of your accuracy? Does your accuracy change if you tick cell-to-cell variation but others haven't, or if you don't tick it and others have? Is it treated just like the other answers?

The reason I am asking is that a lot of the time this box doesn't seem to be ticked by the majority even for some obvious variation, so the consensus has it at 0. I guess it is also even more subjective than the other classifications, as it comes down to how much variation justifies ticking the option.