<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>Objects Perceive Me</title>
	<link>https://objectsperceiveme.online</link>
	<description>Objects Perceive Me</description>
	<pubDate>Tue, 12 Mar 2024 21:14:38 +0000</pubDate>
	<generator>https://objectsperceiveme.online</generator>
	<language>en</language>
	
		
	<item>
		<title>Top</title>
				
		<link>https://objectsperceiveme.online/Top</link>

		<pubDate>Fri, 08 Mar 2024 21:25:02 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Top</guid>

		<description>

&#60;img width="1650" height="1275" width_o="1650" height_o="1275" data-src="https://freight.cargo.site/t/original/i/2aa79c2e78c5273211695a9e9ae8c5a0578959a280e3af5beb26b2498da7878f/triangle-of-meaning.jpg" data-mid="206656829" border="0"  src="https://freight.cargo.site/w/1000/i/2aa79c2e78c5273211695a9e9ae8c5a0578959a280e3af5beb26b2498da7878f/triangle-of-meaning.jpg" /&#62;
	
	For an image to be, does an agent have to observe and process it?&#38;nbsp;

	


</description>
		
	</item>
		
		
	<item>
		<title>An Intro</title>
				
		<link>https://objectsperceiveme.online/An-Intro</link>

		<pubDate>Fri, 08 Mar 2024 21:25:03 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/An-Intro</guid>

		<description>Computational Images&#38;nbsp;
The Specter of Representation
&#60;img width="30" height="30" width_o="30" height_o="30" data-src="https://freight.cargo.site/t/original/i/d1de21b9d3e16e3a82811d70fd62bc2e093452c2d133fc5043b5bcf374736450/eyeballs.svg" data-mid="206893517" border="0" alt="#eye" data-caption="#eye" src="https://freight.cargo.site/w/30/i/d1de21b9d3e16e3a82811d70fd62bc2e093452c2d133fc5043b5bcf374736450/eyeballs.svg" /&#62;&#38;nbsp;&#60;img width="30" height="30" width_o="30" height_o="30" data-src="https://freight.cargo.site/t/original/i/5bf22e786a2a4e2cde4569b4e8837a453b8501a2c4984b37230663c7d3412497/eyeballs-1.svg" data-mid="206893518" border="0" alt="#eye" data-caption="#eye" src="https://freight.cargo.site/w/30/i/5bf22e786a2a4e2cde4569b4e8837a453b8501a2c4984b37230663c7d3412497/eyeballs-1.svg" /&#62;




	
	
    
“Everything we see hides
another thing, we always want to see what is hidden by what we see. There is an
interest in that which is hidden and which the visible does not show us. This
interest can take the form of a quite intense feeling, a sort of conflict, one
might say, between the visible that is hidden and the visible that is present.”
René Magritte, in an interview about his self-portrait The Son of Man (1964)


	


	
	&#60;img width="1130" height="662" width_o="1130" height_o="662" data-src="https://freight.cargo.site/t/original/i/5902148e08b755192ac72b3948bb747b1e6bebab716a2dd7f09b16dfd7d749ce/Screenshot-2024-03-22-at-4.02.34-PM.png" data-mid="207378156" border="0"  src="https://freight.cargo.site/w/1000/i/5902148e08b755192ac72b3948bb747b1e6bebab716a2dd7f09b16dfd7d749ce/Screenshot-2024-03-22-at-4.02.34-PM.png" /&#62;Centre for the Study of the Networked Image, G. Cox, A. Dekker, A. Dewdney, and K. Sluis. “Affordances of the Networked Image”. The Nordic Journal of Aesthetics 2021.&#38;nbsp;Link.

The processes of computation and automation that produce digitized images have displaced the concept of an image once conceived through optical devices such as a photographic plate or a camera mirror that were invented to accommodate the human eye. Computational images exist as information within networks mediated by coded machines. They are increasingly less about what art history understands as representation or photography considers indexing and more an operational product of data processing determined by numerical information. Within this new reality, artificial intelligence (AI) applications are rapidly burgeoning as dominant sources of image production. What becomes of a visual world mediated first by data points from a specific training set expressed through tokens, pixels, text? 
In this performative website, an extension of my PhD project, I take images as objects to help me think about the philosophy of
computation. My account includes a history that is not intended to be exhaustive in the
way a historian might undertake, but rather to serve as a theoretical framework that
problematizes the political, social, and epistemic causes and effects of computation on
the concepts of representation and truth. In the pursuit of such a multidisciplinary
analysis, my approach melds theory with practice to serve as an investigation of the past,
critique of the present, and radical speculation for futurity. If this approach retains
anything from the history of philosophy, it is the spirit of askēsis, an exercise in knowing
and becoming myself in the activity of thought. The double bind of critically analyzing
representation requires the accounting of oneself in the act, thus the process is always
both reflective and self-reflective at the same time. In this context, my analysis takes the
form of a neural network, linking ideas, histories, names, and objects in multiple
dimensions that can be read in multiple ways. Here, the speculative character of this
project is intended to function like a database, affording the reader the chance to draw
their own connections and to provoke the forming of new lines for what is possible. 
link to dissertation: here


	


	
	interstices of AI 1, 2023. Video made by interpolating between digital photographs taken by me, using Runway ML’s frame interpolation tool. Track: Chahargah I, Op. 75 by Ata Ebtekar.
	
</description>
		
	</item>
		
		
	<item>
		<title>What is an image? Cover</title>
				
		<link>https://objectsperceiveme.online/What-is-an-image-Cover</link>

		<pubDate>Sat, 09 Mar 2024 00:08:58 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/What-is-an-image-Cover</guid>

		<description>&#60;img width="6000" height="3376" width_o="6000" height_o="3376" data-src="https://freight.cargo.site/t/original/i/8d7da517df2fcafa36cea6a8b01af1c699ddea240c722eb7434d2b379bdccb71/europe23-56.jpg" data-mid="206519976" border="0"  src="https://freight.cargo.site/w/1000/i/8d7da517df2fcafa36cea6a8b01af1c699ddea240c722eb7434d2b379bdccb71/europe23-56.jpg" /&#62;
	
	What is an image?
	
</description>
		
	</item>
		
		
	<item>
		<title>What is an Image? An intro</title>
				
		<link>https://objectsperceiveme.online/What-is-an-Image-An-intro</link>

		<pubDate>Sat, 09 Mar 2024 00:33:10 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/What-is-an-Image-An-intro</guid>

		<description>
	
	



    


Post-Image
On a specific theory of images, I follow
W.J.T.
Mitchell in distinguishing between picture&#38;nbsp;and image. Where pictures are concrete objects, images are virtual and
phenomenal appearances presented to a beholder through objects. “To picture” is
a deliberate act of visual representation, whereas “to image or imagine” is
more elusory, general, and spontaneous.[1]&#38;nbsp;Pictures–and photographs–can be taken, while images are made. I think about what
to make of images that are made today, co-constructed by sets of computational logic
that test the limits of human representation. 



Representation is insufficient as a concept to
explain instances when images are made and distributed between machines with either subperceptual or little to no human intervention. Here, I identify the capacity of art to
transfigure (transmogrify, transduce) the illegibility of computation and AI
into new pathways for experiencing the world by material investigations that gesture
towards the possibilities of difference. The open indeterminacy of computation
allows for an opportunity to decenter normative ideas of what is defined and counted
as human.&#38;nbsp;
Firmly in an epoch of algorithmic culture, where computational agency, intelligence, and creativity are legitimate ideas to ponder, I also want to think about its history and its politico-epistemic effects on images. If we can fight to make this
new world fairer and more available, wrested away from its racialized
techno-capital-military influences, it is a fight worth having for a future
whose cosmology we can start creating today.
















The relinquishing of the primacy of the human
eye and the acknowledgment of the failures of human exceptionalism allow us to
experience the world in new and deeper ways. Like symbiotes or cyborgs, we have
adopted new epistemic instruments that produce entirely new worlds in
collaboration with computing intelligence.















The
computational sublime is a particular concept developed as a potential escape
from automated surveillance culture.







At its most insidious, I argue that the complete cognitive offloading of imaging and sight to computation (the statistical gaze) leads to second-order social consequences that
intensify sensory overload (chaos, arbitrariness, or unknowability) and enable
abuses of power. By second-order I mean consequences that are not the direct
goal of technological development but nevertheless part of its outcomes. Two
specific consequences I identify are: 1) the ways in which human correspondence
with social reality is obfuscated and homogenized by narrow applications of
computation, and 2) how the ubiquity of surveillance as an outgrowth of
computation is changing the form of power dynamics.&#38;nbsp;


My unit of analysis follows the development of
computation and is thus not reducible to one society or individual, although I
focus mostly on its Western origin story and implications. When I consider social
relations, I mean the interactions among organisms that live and commune
together. Human social relations are increasingly complexified by myriad
variables that affect how we communicate, think, exchange, and live. I consider
the effects of computation within this broader framework of the social, which for
me also encompasses culture.



My argument implies that computation, as a new
form of mediating the world, enables a deluge of opaque image production that
challenges how we can know or make sense of things. 















The
human optical system becomes one part of a larger loop of information
processing.



For example, an analysis
estimated that from 2022 to 2023 alone, AI was used
to produce 15 billion images, a figure that took photography 150 years to reach
(circa 1826 until 1975).[2] The argument also implies that beyond simply facilitating this torrent,
computation is responsible for the development of visual surveillance tools
that enable new ways of monitoring, measuring, predicting, commodifying, and
controlling individuals and large groups of people. &#38;nbsp; &#38;nbsp;&#38;nbsp;


















Jacques Rancière’s concept of the distribution of the
sensible defines politics in a way that includes what Michel Foucault called the
order of things, or what is symbolically representable at any given place and
time. When the two concepts are combined, the struggle over the methods of sensing
and the process of perception establishes a relation I call political. The distribution of the sensible contains the ethical,
intellectual, and political as aesthetic experiences. The aesthetic here is not
about the judgment of beauty but rather the relationship between sense
perception, embodiment, meaning, and social relations. 





















Rather than a critique, my method of analysis
is more akin to what Eyal Weizman and Matthew Fuller define as investigative
aesthetics. The practice of investigative aesthetics creates observable composites
using various signals that include forensic, technical, material, cultural,
political, and ethical evidence. Various online and offline methodologies are
combined in transdisciplinary or antidisciplinary work to render the
causes of an event or the existence of an object visible. 



















Practitioners deploy computational methods within legal,
forensic, artistic, and critical frameworks. Investigative aesthetics takes seriously the
material conditions through which events occur and attempts to create a public
alternative to facts presented by power-holding actors. In so doing, it also
points to longer historical processes that shape events and outcomes in the
present, what I think of as the archaeology of an event. Weizman and Fuller
leverage the technical in the spirit of producing a commons for knowledge that
resembles a multisensory, navigable architectural model. To experience and make
sense of the world and to feel spurred on to imagine it differently are
aesthetic experiences and require investigative practices.


This method expands the aesthetic and logical
limitations of a single human subject, and thus expands a collective’s ability
to sense and reason. I loosely follow their concept of investigative aesthetics
throughout in attempting to agnostically make sense of computational logics and
practices while retaining an ethical commitment to harm reduction and the
principle of freedom. My methodology includes producing an archive, constructing
a theoretical framework, empirically testing through technical and artistic
practices, and relying on transdisciplinary research and pedagogy. The goal of my project is to create an ongoing framework for understanding and creating computational images that is open to new information, new case studies, and new
practices (evidence).









[1] Mitchell, W. J. T., “Representation,” in F. Lentricchia &#38;amp; T. McLaughlin (eds), Critical Terms for Literary Study, 2nd edn, University of Chicago Press, Chicago, 1995, 4.

[2] Every Pixel Journal 
	
</description>
		
	</item>
		
		
	<item>
		<title>Recursivity</title>
				
		<link>https://objectsperceiveme.online/Recursivity</link>

		<pubDate>Mon, 11 Mar 2024 21:11:52 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Recursivity</guid>

		<description>&#60;img width="6000" height="3375" width_o="6000" height_o="3375" data-src="https://freight.cargo.site/t/original/i/b8a4916f0eca194471a530461bcdd45cdbeb5674afaf6f2a486e15f7f1ba3492/europe23-516-1.jpg" data-mid="206537990" border="0"  src="https://freight.cargo.site/w/1000/i/b8a4916f0eca194471a530461bcdd45cdbeb5674afaf6f2a486e15f7f1ba3492/europe23-516-1.jpg" /&#62;What gazes back?
 </description>
		
	</item>
		
		
	<item>
		<title>Statistical Gaze</title>
				
		<link>https://objectsperceiveme.online/Statistical-Gaze</link>

		<pubDate>Mon, 11 Mar 2024 21:11:58 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Statistical-Gaze</guid>

		<description>
	
	Computational Representation or the Statistical
Gaze 




“I think the
style would be a bit whimsical and abstract and weird, and it tends to blend
things in ways you might not ask, in ways that are surprising and beautiful. It
tends to use a lot of blues and oranges. It has some favorite colors and some
favorite faces. If you give it a really vague instruction, it has to go to its
favorites. So, we don’t know why it happens, but there’s a particular woman’s
face it likes to draw — we don’t know where it comes from, from one of our 12
training datasets — but people just call it “Miss Journey.” And there’s
one dude’s face, which is kind of square and imposing, and he also shows up
some time, but he doesn’t have a name yet. But it’s like an artist who has
their own faces and colors.”&#38;nbsp;


David Holz, Midjourney founder, interview with The Verge (2022)


For
us humans, the computation involved in generative AI is catalyzing a
significant change in the processual truth of what an image is. The web of
computational operations in this new production process forces us to re-think
what the art history and photography canons call representation. Images were once conceived
through optical concepts and materials such as a vanishing point, a
photographic plate, or a camera mirror, all invented to accommodate the
human eye; computation requires them to be processed as digitized data, or
numerical information.
 
In
this visual investigation, I wanted to see what would happen when a
generative AI model is set off on a recursive loop in which its own outputs are
iteratively fed back to it as inputs. My hunch, or hypothesis, was that the
statistical operations of Midjourney would push the model to converge to the
most probable averages of its dataset when left unattended by human
intervention. In this experiment, I wanted to make experienceable in an
exaggerated way what could become of image production if it is
increasingly automated to produce what is most probable on the Internet.
 The
increasing presence of AI-generated images is a phenomenon that extends to mobile
photography, the metaverse, and scientific observation. Content on the Internet
will soon become an AI-majority artifact, which means future datasets used to
train newer AI models will rely on synthetic data, creating a closed feedback
system that can intensify initial conditions and biases. Researchers have
already observed this process in AI-generated natural language experiments. In
one experiment, they call this effect model collapse.[1] In the published paper, they include statistical evidence suggesting that
recursion with AI-generated data creates a homogenization in outputs that
increasingly forgets the tails of its distribution curve. In other words, outliers
in the training data become lost as the model reinforces what was originally overrepresented,
leading to increasing convergence and more errors. This recursive effect is a
slippery slope that poses one of the more troubling aspects of automation I
wanted to explore: “biased slop.”
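The feedback dynamic described above can be sketched in a toy simulation of my own (an illustration, not the experiment from the cited paper): a Gaussian model repeatedly re-fit to a small sample of its own outputs loses its spread, forgetting the tails of its distribution in exaggerated, one-dimensional form.

```python
# Toy "model collapse" simulation: a Gaussian model is repeatedly re-fit
# to a small sample drawn from its own previous fit. The estimated spread
# drifts toward zero, so the tails of the distribution are progressively
# forgotten, a one-dimensional analogue of the recursive effect.
import numpy as np

def simulate_collapse(generations=1000, sample_size=20, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0              # generation 0: the "real data" model
    spreads = [sigma]
    for _ in range(generations):
        synthetic = rng.normal(mu, sigma, sample_size)  # model's own outputs
        mu, sigma = synthetic.mean(), synthetic.std()   # re-fit on them
        spreads.append(sigma)
    return spreads

spreads = simulate_collapse()
print(f"spread at generation 0: {spreads[0]:.3f}")
print(f"spread at final generation: {spreads[-1]:.2e}")
```

The collapse happens without any adversarial input: ordinary sampling noise plus re-fitting is enough to shrink the distribution toward its most probable average.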
I
designed a test for this by manually setting up a recursive process on
Midjourney. I fed the model’s visual outputs back in as its inputs over a
series of iterations to gauge how the initial image might change and converge
formally when left reproducing without my textual prompting. Midjourney offers
the ability to prompt the model with a pair of images rather than words, or a
combination of images and words. In the case of the former, the company states
on its website and Discord channel that the model “looks at the concepts and
aesthetics of each image and merges them into a novel new image.” Just how
concepts or aesthetics are defined by the model can’t really be known, although
learning how to steer it towards desired outcomes has created a market for what
the industry calls prompt engineering.








[1] Shumailov, I., et al. “The Curse of Recursion: Training on Generated Data Makes Models Forget.” 2023. https://arxiv.org/abs/2305.17493













	


	
	
















meta-diffusion
1 (2023). Initial reference image: Heydar Aliyev Centre by Zaha Hadid. Initial
text prompt: “an architectural structure in the shape of a tesseract in the
middle of a contemporary Middle Eastern city.” All images produced with equal
weights, default settings, v 5.1, medium stylized.























meta-diffusion
2 (2023). No initial reference image. Initial
input prompt: “an architectural structure in the shape of a tesseract in the
middle of a contemporary Middle Eastern city.” All images produced with equal
weights, default settings, v 5.1, medium stylized.


meta-diffusion
3 (2023). Initial input prompt: “a beautiful woman
in a headscarf posing for a photograph.” Did not use grids; selected face I
believed to be “darker” out of the first mostly white outputs. All images
produced with equal weights, default settings, v 5.1, medium stylized.

	
</description>
		
	</item>
		
		
	<item>
		<title>Imaginary Apparel</title>
				
		<link>https://objectsperceiveme.online/Imaginary-Apparel-1</link>

		<pubDate>Tue, 12 Mar 2024 20:47:15 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Imaginary-Apparel-1</guid>

		<description>
A New Kind Of SUBLIME?</description>
		
	</item>
		
		
	<item>
		<title>Imaginary Apparel</title>
				
		<link>https://objectsperceiveme.online/Imaginary-Apparel</link>

		<pubDate>Tue, 12 Mar 2024 21:14:38 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Imaginary-Apparel</guid>

		<description>
	
	Creative Infringement
In 2019, computation
was used to compile many terabytes of data from the Event Horizon Telescope
(EHT) to produce the first (synthetic) image of a black hole roughly 55 million
light-years away in the center of the Messier 87 (M87) galaxy. In 2022, Sagittarius A* was imaged at the center of our own Milky Way by the
Event Horizon Telescope project. These achievements, and the subsequent images
produced by the James Webb Space Telescope, mark a seminal moment in the history of
images. 



The first
visualization of a black hole required a synchronized array of radio telescopes
located across the globe, turning the world’s surface into a sort of giant planetary
sensor–a theoretical aperture the size of the Earth. The web of telescopes,
which are actually radio dishes, produces high-fidelity information through an interferometric process called Very Long Baseline
Interferometry (VLBI) that combines their individual measurements of wave
interference. To model M87’s appearance, EHT ran simulations and used ray
tracing to describe the gas and plasma surrounding the black hole that were
parametrized with its spin and temperature values. Ray tracing describes the
computational reproduction of optical effects such as light, shadows, and
depth.
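The geometric kernel of ray tracing can be shown in a minimal sketch (my own illustration, not the EHT pipeline): finding where a ray intersects a sphere by solving a quadratic, the primitive from which light, shadow, and depth effects are then derived.

```python
# Minimal ray-sphere intersection: the basic operation of a ray tracer.
# Solve |origin + t*direction - center|^2 = radius^2 for the nearest t >= 0.
import math

def ray_hits_sphere(origin, direction, center, radius):
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)   # nearest of the two intersections
    return t if t >= 0 else None

# A ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # t = 4.0
```

A renderer repeats this test per pixel, per light source, and per bounce; the EHT simulations apply the same principle to photon paths curved by gravity rather than straight lines.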


















The massive
amounts of information recorded over ten days of observation provided data that
took two years to compute and process into a verifiable and reproducible image.
Scholars such as Shane Denson point to the micro-temporal speed involved in mediating
everyday computational images, but in the case of the black hole the process is
macro-temporal, stretching over days and years.[1]&#38;nbsp;This scale, both spatial and temporal, is a new capability that affords a
new kind of image. 



What we
see, the orange circle amid a black void, is the light from the accretion disk
around the shadow of the mass that is the black hole. The colors we see were
in fact chosen by scientists to correspond to the temperature and
wave frequencies observed in the magnetic fields near the event horizon of the
black hole at the center of M87. The initial blurry image has been subsequently
sharpened with new algorithms and upsampling techniques and re-published. 



The
collaboration brought together institutions and astronomers from all over the
world seeking to push the observation of quasars and black holes beyond the
limits holding science back: providing a deeper understanding of space, time,
and gravity fundamental to understanding the universe. The limits of knowledge, and the terrifying yet affectively
pleasurable feeling of confronting them through an aesthetic experience, are the
sublime I want to think through here. The profundity
of this astronomical image, and what it verifies and confers, prompts the question
of whether it stands for a new form of the sublime.
In The
Critique of the Power of Judgement, Kant analyzes the conditions that
enable what he calls reflective judgements of taste, most notably those
accompanying the experience of the beautiful and the sublime. Formed without the
logical concepts he claimed were prerequisites for understanding and cognition,
judgements of taste are closer to the imagination: they arise freely and evoke
delight without ends to justify their existence. There are no proofs to
validate these reflective judgements. Unlike the singular preferences of
an individual, which he calls agreeable, judgements of the beautiful and the sublime
assume a universal validity, although they are subjectively felt, which bonds
all humans in what Kant calls the sensus communis of taste. 



The sublime
transcends the limitations of expression and representation while inducing both
pleasure and terror. Regarding the limits of reason referenced in the epigraph
to this chapter, the sublime allows the mind to recognize its own disposition
when estimating the external world. When confronted by the infinity of the
sublime, we experience an aesthetic recognition of our own finitude.&#38;nbsp; 



For Kant,
the sublime is part of his larger critical and moral project to define freedom
within the edges of human reason and aesthetic experience. I want to stay with
this principal, and indeed with this definition of the sublime, yet also point
to Kant’s teleology of judgment as a shortcoming that effaces the indeterminacy
of experience–or the contingency of experience in the world. 



Fred Moten
writes about Black aesthetics as a manifestation of indeterminacy and freedom
from within unfreedom–in the break, in the cut, in the blur. In reference to
Miles Davis’ kinetic musical improvisation and the words of Samuel Delany
alluding to Cecil Taylor and Amiri Baraka, he summons the sublime as “that which
is experienced as a kind of temporal distancing and the out interinanimation of
disconnection…”[2] In
words written about the digital art of American Artist, Moten’s lyrical
exposition is worth quoting at length:



American
Artist rigorously understands that this force and power, in spite of all
rhetoric regarding freedom of the imagination under liberalism, which the
artist is supposed to embody, has most often, and for most people, been
carceral and regulative. In this regard, (black) art has never simply been a
place one goes to get free; it is, rather, an experimental constraint one
enters, at one’s happily necessary peril, in order to test and break freedom’s
limits.[3]



It is this temporal distancing
and disconnection at the edge of reason that captures the power of the (B)lack
hole image. 



Computation,
including AI, introduces the potential to make an image that confounds our
sense of representation through the indeterminacy of its making. In the words
of Parisi, “the medium is given the task of transducing the unknown.”[4] Following Sylvia Wynter’s articulation
of Black women as representative of chaos, or the outside of reason set against
the universality of the Western Man central to the history of science, a
conceptually fugitive form of the computational suggests one path away from the
instrumentality of dominant technological solutionism.[5] 







[1] Denson, Shane. Discorrelated
Images. Duke University Press, 2020.

[2] Moten, Fred. In
The Break: The Aesthetics Of The Black Radical Tradition. United
States,&#38;nbsp;University of Minnesota Press,&#38;nbsp;2003. 155.







[3] Moten, Fred. “American
Artist.” Cura Magazine 38. 2022.







[4]&#38;nbsp;Parisi, Luciana.
“The Negative Aesthetic of AI.” Digital Aesthetics Workshop. 2023, Stanford
Humanities Center, Stanford Humanities Center. 







[5] Wynter, Sylvia.
1984. “The Ceremony Must Be Found: After Humanism.” boundary 2 12/13, no. 3/1:
19–70.



















&#60;img width="3500" height="2187" width_o="3500" height_o="2187" data-src="https://freight.cargo.site/t/original/i/5eb26d892c120215c1acfacf87f5ce377eccec2fcc9c3b986f22a7ca5f47c974/aitor-throup-selected-images-for-at.jpg" data-mid="206640826" border="0"  src="https://freight.cargo.site/w/1000/i/5eb26d892c120215c1acfacf87f5ce377eccec2fcc9c3b986f22a7ca5f47c974/aitor-throup-selected-images-for-at.jpg" /&#62;
 Aitor Throup is a multidisciplinary designer and artist. The image above is from Throup’s “New Object Research” apparel catalogue from 2013. Read a profile and interview with Throup I conducted here.&#38;nbsp;

&#60;img width="1024" height="1024" width_o="1024" height_o="1024" data-src="https://freight.cargo.site/t/original/i/0d91250e089d0d56806955cef5933910dc6d364b8348b37b2b062008731d0320/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_f2633ef3-b6b1-4960-936a-b5117aebb95a.png" data-mid="206640824" border="0"  src="https://freight.cargo.site/w/1000/i/0d91250e089d0d56806955cef5933910dc6d364b8348b37b2b062008731d0320/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_f2633ef3-b6b1-4960-936a-b5117aebb95a.png" /&#62;An artificial image. Midjourney V6 Prompt: “An editorial image of a young black male model wearing draped contemporary clothing with baggy slacks.” This prompt was blended with the first image above by Aitor Throup using /blend feature.
&#60;img width="1024" height="1024" width_o="1024" height_o="1024" data-src="https://freight.cargo.site/t/original/i/b8ea9f55e6723280076f020f33879cb39ee4d53a76b55faaf49ca73aba511583/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_d58bf163-4179-4b8a-8636-f4e323c9024b.png" data-mid="206640822" border="0"  src="https://freight.cargo.site/w/1000/i/b8ea9f55e6723280076f020f33879cb39ee4d53a76b55faaf49ca73aba511583/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_d58bf163-4179-4b8a-8636-f4e323c9024b.png" /&#62;An artificial image. Midjourney V6 Prompt: “An editorial image of a young black male model wearing draped contemporary clothing with baggy slacks.” This prompt was blended with the first image above by Aitor Throup using /blend feature.
&#60;img width="1024" height="1024" width_o="1024" height_o="1024" data-src="https://freight.cargo.site/t/original/i/79c8a696ea64c76c5742218e282465e1e19b131150d62923e42c07f98d92fd86/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_2737d680-7b42-4dd6-a690-52fbd647d22c.png" data-mid="206640823" border="0"  src="https://freight.cargo.site/w/1000/i/79c8a696ea64c76c5742218e282465e1e19b131150d62923e42c07f98d92fd86/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_2737d680-7b42-4dd6-a690-52fbd647d22c.png" /&#62;An artificial image. Midjourney V6 Prompt: “An editorial image of a young black male model wearing draped contemporary clothing with baggy slacks.” This prompt was blended with the first image above by Aitor Throup using /blend feature.
&#60;img width="1024" height="1024" width_o="1024" height_o="1024" data-src="https://freight.cargo.site/t/original/i/f875490f08825f6edfd76f8e5b992487042212ed3f91c88e01b3952a1a70f826/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_e58aa394-a5ec-4757-a4e0-e3b5c8e904bb.png" data-mid="206640825" border="0"  src="https://freight.cargo.site/w/1000/i/f875490f08825f6edfd76f8e5b992487042212ed3f91c88e01b3952a1a70f826/samineshadow_An_editorial_image_of_a_young_black_male_model_wea_e58aa394-a5ec-4757-a4e0-e3b5c8e904bb.png" /&#62;An artificial image. Midjourney V6 Prompt: “An editorial image of a young black male model wearing draped contemporary clothing with baggy slacks.” This prompt was blended with the first image above by Aitor Throup using /blend feature.

	

</description>
		
	</item>
		
		
	<item>
		<title>Anatomy of an Image</title>
				
		<link>https://objectsperceiveme.online/Anatomy-of-an-Image</link>

		<pubDate>Mon, 11 Mar 2024 18:48:10 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Anatomy-of-an-Image</guid>

		<description>&#60;img width="970" height="683" width_o="970" height_o="683" data-src="https://freight.cargo.site/t/original/i/895d481d58fc7d52c0772fdbdefbcb2a9d968eb69bf1459794500b814e437441/Phasesofthemoon.jpg" data-mid="206541260" border="0" data-scale="94" src="https://freight.cargo.site/w/970/i/895d481d58fc7d52c0772fdbdefbcb2a9d968eb69bf1459794500b814e437441/Phasesofthemoon.jpg" /&#62;
Where does vision begin to see?</description>
		
	</item>
		
		
	<item>
		<title>Anatomy of an Image</title>
				
		<link>https://objectsperceiveme.online/Anatomy-of-an-Image-1</link>

		<pubDate>Mon, 11 Mar 2024 18:43:35 +0000</pubDate>

		<dc:creator>Objects Perceive Me</dc:creator>

		<guid isPermaLink="true">https://objectsperceiveme.online/Anatomy-of-an-Image-1</guid>

		<description>

	
	Anatomy of an Image&#38;nbsp;

What becomes of photography in a post-photographic epoch?

&#38;nbsp;
&#60;img width="6000" height="3376" width_o="6000" height_o="3376" data-src="https://freight.cargo.site/t/original/i/93d49c72001a4f9a777e2515f5146704024529a4d4f753f48a584a36adba289e/europe23-407-2.jpg" data-mid="206522019" border="0"  src="https://freight.cargo.site/w/1000/i/93d49c72001a4f9a777e2515f5146704024529a4d4f753f48a584a36adba289e/europe23-407-2.jpg" /&#62;
“Original” photograph (above) shows birds and people walking in daylight through a Barcelona city square. Taken on a Fujifilm X-T20 mirrorless digital camera. Settings: 55mm focal length with a 1/4 light diffusion filter, f/22, 1/8 second exposure, ISO 200. Largely unedited output.

&#60;img width="3024" height="4032" width_o="3024" height_o="4032" data-src="https://freight.cargo.site/t/original/i/c17339bd881eb9e2b24f6516ccea268789a88ead2b77e374e3774fdae154a53c/IMG_2894.jpg" data-mid="206523887" border="0"  src="https://freight.cargo.site/w/1000/i/c17339bd881eb9e2b24f6516ccea268789a88ead2b77e374e3774fdae154a53c/IMG_2894.jpg" /&#62;Proof print of original.
Image-to-video via Runway ML with the prompt “Birds Flying Through Public Square.”


A series of photographs from the original scene stitched together using Runway ML’s frame interpolation tool. Track Two by Corre.

&#60;img width="2048" height="1152" width_o="2048" height_o="1152" data-src="https://freight.cargo.site/t/original/i/0669e4b3af6d404da3f99a0698e243db822a9039b5fe5234623a87b9b00ec53a/B0FA1D26-E699-4119-9B20-3884D9B04037-71810-00000B93DC80E980.JPG" data-mid="206523805" border="0"  src="https://freight.cargo.site/w/1000/i/0669e4b3af6d404da3f99a0698e243db822a9039b5fe5234623a87b9b00ec53a/B0FA1D26-E699-4119-9B20-3884D9B04037-71810-00000B93DC80E980.JPG" /&#62;
Photographic CMYK 33x66 inch large print on thick, matte paper, held to the wall by clips.&#38;nbsp;


&#60;img width="2750" height="1656" width_o="2750" height_o="1656" data-src="https://freight.cargo.site/t/original/i/ec77ee17ceb74024fe876f3c32e988c7140e4a991a0758d7dabe84fb6d648a6b/Screenshot-2024-03-11-at-12.10.09-PM.png" data-mid="206525248" border="0"  src="https://freight.cargo.site/w/1000/i/ec77ee17ceb74024fe876f3c32e988c7140e4a991a0758d7dabe84fb6d648a6b/Screenshot-2024-03-11-at-12.10.09-PM.png" /&#62;
The JPEG file with a window overlay showing the same file opened as plain text.
&#60;img width="8310" height="4676" width_o="8310" height_o="4676" data-src="https://freight.cargo.site/t/original/i/8db7f95fc4bdaadb6bf1b79b72eaa5bdbfa350225ad2591564bdfc44cdf53506/birds-barcelona-expanded.jpg" data-mid="209352864" border="0"  src="https://freight.cargo.site/w/1000/i/8db7f95fc4bdaadb6bf1b79b72eaa5bdbfa350225ad2591564bdfc44cdf53506/birds-barcelona-expanded.jpg" /&#62;
Expanded frame of original made with Adobe Photoshop Generative Expand feature.
&#60;img width="1170" height="2532" width_o="1170" height_o="2532" data-src="https://freight.cargo.site/t/original/i/ac641b000c96d063faa15e1926b1a598caca6ce99d86ea6f7e43d674a6f8acbe/IMG_2893.PNG" data-mid="206523941" border="0"  src="https://freight.cargo.site/w/1000/i/ac641b000c96d063faa15e1926b1a598caca6ce99d86ea6f7e43d674a6f8acbe/IMG_2893.PNG" /&#62;

iPhone screenshot of the camera roll interface showing the proof print of the original image.&#38;nbsp;


&#60;img width="1170" height="2532" width_o="1170" height_o="2532" data-src="https://freight.cargo.site/t/original/i/4fae60d212daa94683eb3adb50e8d710ec5f3903639a70e95c37d345a9aa22e9/IMG_2895.PNG" data-mid="206525683" border="0"  src="https://freight.cargo.site/w/1000/i/4fae60d212daa94683eb3adb50e8d710ec5f3903639a70e95c37d345a9aa22e9/IMG_2895.PNG" /&#62;
&#60;img width="1170" height="2532" width_o="1170" height_o="2532" data-src="https://freight.cargo.site/t/original/i/bfb664d6a6fcee83ee86895d1d12e9aba8f3269661057b0dc484670198759df2/IMG_2896.PNG" data-mid="206525697" border="0"  src="https://freight.cargo.site/w/1000/i/bfb664d6a6fcee83ee86895d1d12e9aba8f3269661057b0dc484670198759df2/IMG_2896.PNG" /&#62;
&#60;img width="1170" height="2532" width_o="1170" height_o="2532" data-src="https://freight.cargo.site/t/original/i/a65684044fc01c71b25bb25afb9b7abc770c0145e346019ff35d9074e93cf711/IMG_2897.PNG" data-mid="206525760" border="0"  src="https://freight.cargo.site/w/1000/i/a65684044fc01c71b25bb25afb9b7abc770c0145e346019ff35d9074e93cf711/IMG_2897.PNG" /&#62;

Discerning ChatGPT-4’s image-description and meta-cognitive capabilities.

&#60;img width="1052" height="1272" width_o="1052" height_o="1272" data-src="https://freight.cargo.site/t/original/i/3487e21406c6188fb3e73ca27763dec2fec043878bee2c3fcaaca2db93f2a473/Screenshot-2024-02-21-at-11.23.16-AM.png" data-mid="206536024" border="0"  src="https://freight.cargo.site/w/1000/i/3487e21406c6188fb3e73ca27763dec2fec043878bee2c3fcaaca2db93f2a473/Screenshot-2024-02-21-at-11.23.16-AM.png" /&#62;
Midjourney’s /describe feature producing prompts from the print of the original digital image.

&#60;img width="600" height="450" width_o="600" height_o="450" data-src="https://freight.cargo.site/t/original/i/6c93adeeffc3a45de2944f0c8da52b33ab2a1176685a79728aeae79fa9a574b2/polycam-ezgif.com-optimize.gif" data-mid="208154916" border="0"  src="https://freight.cargo.site/w/600/i/6c93adeeffc3a45de2944f0c8da52b33ab2a1176685a79728aeae79fa9a574b2/polycam-ezgif.com-optimize.gif" /&#62;GIF of a Gaussian Splat scan of the large print.




    
Midjourney V6 output of the original image + text prompt: “A painting inspired by and in the style of this long exposure photograph.”



 </description>
		
	</item>
		
	</channel>
</rss>