Dataset Viewer (auto-converted to Parquet)
Columns: ids (string, length 36), texts (string, length 212 to 1.03k)
4f2034a7-ac6c-489a-881d-3aef4f1d0c0d
Industrial Language-Image Dataset (ILID): Adapting Vision Foundation Models for Industrial Settings. Keno Moenck1,*, Duc Trung Thieu1, Julian Koch1, Thorsten Schüppstuhl1. 1Hamburg University of Technology, Institute of Aircraft Production Technology. github.com/kenomo/ilid. In recent years, the upstream of Large Language Mode...
1db71561-d2b5-4705-9d1a-510b84df1992
Here, fine-tuning the models or transfer learning on domain-specific data is unavoidable when aiming for adequate performance. In this work, we, on the one hand, introduce a pipeline to generate the Industrial Language-Image Dataset (ILID) based on web-crawled data; on the other hand, we demonstrate effective self-s...
bf8e7d1a-62eb-43ac-a9b5-16e6796b80e8
In the scope of training deep models, industrial contexts1 lack everyday objects and scenes, typically covered by publicly available datasets, which is why applications in these specialized domains demand… *Corresponding author: keno.moenck@tuhh.de. 1 We define the industrial domain as follows: industrial activities...
bbdcb2e9-d7eb-4157-959f-38d1e2a4c0f5
The availability of curated, publicly accessible datasets specific to industrial needs is exceedingly sparse, e.g., the MVTec [5–8], VISION [9], or tool recognition [10] datasets encapsulate only a limited spectrum of objects and support only a handful of trainable tasks based on the provided ground truth annotations...
2cd18c18-9a39-419d-99e6-a944263c0bb5
These models, e.g., BERT [13], the well-known GPT-n series [14–16], or Llama [17–19], learn rich knowledge representations capable of transferring to various downstream tasks. The shift in AI drives single-task and single-modality learners toward a paradigm encompassing diverse tasks and multimodalities, which more cl...
25991450-a7f3-4f85-a8be-0e2832a4b0fd
Besides, given such large, partially unstructured datasets, only self-supervised or unsupervised methods are able to learn from the data effectively. A self-supervised approach capable of learning from text and image modalities is contrastive learning, in which a model learns to distinguish between positive and negat...
525e99cf-50bc-4f82-8020-5791a0cd2a22
Contrastive learning, which contrasts positive and negative samples in a batch, is, in the case of vision and language, based on a text and an image encoder. The idea is that the encoders are trained to output embeddings for the image and text, increasing the similarities of positive samples by decreasing the distance in th...
29480e67-4a8c-45bd-8e09-382af24a9a92
Since they are based on web-available data, not all cleaned, post-processed, and curated datasets are published, as in the case of CLIP. [Figure 1 residue: example classification scores, e.g., "… levelling feet round" 0.64, "… collet" 0.24, "… aluminium profile" 0.05, "… button" 0.03 versus "… button" 0.33, "… collet" 0.22, "… magnetic ball joint" 0.22, "… axial j...]
71f07a77-fb41-497a-ad90-e342aecd5cd6
CLIP on the task of classification after (a) transfer learning on the Industrial Language-Image Dataset (ILID) and (b) the zero-shot baseline results. VFMs exhibit rich knowledge representations, are adaptable to various downstream tasks, and generalize better than conventional models, but only to a certain extent ...
e589ab17-f7e4-46b5-91da-6eeab7db4980
In this work, we try to make a step in the direction of utilizing VFM capabilities in specialized industrial domains by making a threefold contribution: • We propose a method to generate the Industrial Language-Image Dataset (ILID) from web-crawled data and release a version that covers objects from different industrial-rel...
4c4b78fe-25b8-45cb-9b4a-86b79f805c54
Besides, comparing only one established model on the data increases the focus, clarity, and depth of the findings in the scope of this work. Nevertheless, we encourage reusing ILID with other models or employing further fine-tuning and transfer learning strategies. The rest of this work is structured as foll...
b4a9809a-e533-4111-88e8-ef4bd77d466d
VFMs in industrial applications Code recognition, object or position recognition, completeness, shape/dimension check, or quantitative or qualitative inspection are typical vision applications in manufacturing [26]. While in manufacturing, these are often suited toward narrow fields of view and close to the object; in ...
34dca95d-b4e3-4aa5-88da-46ff6d073f03
2 Since the data from the web do not belong to us, we are not allowed to publish the images and texts, but we provide the final post-processed metadata, which can be used to reassemble the dataset. Please contact the corresponding author.
b9b7dfea-5a24-475a-bb5a-490955f095f8
[Figure 2 diagram residue: (1) Industrial Language-Image Dataset (ILID) from web catalog crawling and dataset processing, (2) text and image encoders maximizing the score for non-contrasting samples, with prompts such as "…hinge…", "…handle…", "… rod end…", (3) downstream tasks: image classification and segmentation; example crawled entry: {bores for counter-sunk screws} {hinge, det...]
a806b244-2c38-443b-80f2-9697786a49e8
[Figure 2 diagram residue continued: example crawled entry with label "… detachable, type R, x mm" and data: ["Hinges are distinguished by their compact and robust construction. The assortment of materials...", "Zinc die casting ...", "..."]] Figure 2. Overview of this work’s method: (1) generation of the Industrial Language-Image Dataset (ILID), (2) transfer learni...
53a75ba1-4243-475f-9a64-ddb1fd3f6b2c
[28] discusses the abilities of the Segment Anything Model (SAM), a class-agnostic segmentation model that generalizes remarkably well to unseen objects and scenes, in the context of vision applications in the aircraft industry, including manufacturing, intralogistics, and MRO. [30] name two use cases in PCB defect in...
b875aca6-6228-4e2b-b387-4d0bd9d96e3a
It is not an entirely novel approach; however, the origin of the idea of learning from perceptions in natural language cannot be traced back to one specific piece of research. In 1999, [33] explored retrieving words for unknown images based on statistical learning to predict nouns and adjectives. In 2007, [34] demonstrated learning i...
aed90d94-616b-44e3-8402-7f73f488c913
Encoders can be reused from other models and training runs, as demonstrated, e.g., by OpenScene [38], which employs a frozen text and 2D image encoder while training a 3D point cloud encoder for language-based 2D/3D scene understanding. The encoder models are trained to complement and comprehend each other fully by encodin...
ad48f083-191b-4bb7-8906-d06fd1105b3a
images that are not connected, as shown in Fig. 3. Besides prompting for the object’s name, a sufficiently trained text encoder would encode, e.g., conceptually close activities near the object’s name embedding (s. Fig. 3). [Figure 3 residue: example prompts {photo of a hinge}, {cat}, {house}, {gripping}] Figure 3. Joint embedding space of text and image repr...
50dd7115-c548-4643-93c9-61ad337f8d0a
The embedding similarities between pairs are represented by the cosine similarity metric, which is used in a cross-entropy loss to optimize the image and text encoders jointly. 2.2.2 Performance Zero-shot CLIP achieves similar performance or even outperform...
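To make the contrastive objective concrete, here is a minimal sketch of the symmetric cross-entropy over cosine similarities between a batch of paired image and text embeddings; the embedding size, temperature value, and random inputs are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize embeddings so the dot product equals cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise cosine similarities, scaled by a temperature
    logits = image_emb @ text_emb.t() / temperature

    # Matching image/text pairs lie on the diagonal of the similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: image-to-text and text-to-image
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example with random embeddings standing in for encoder outputs
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```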
834949f2-3bdf-42af-a550-565095b05870
When comparing CLIP with other large pre-trained n-shot models such as BiT-M [42] and SimCLRv2 [43], CLIP’s authors show that its zero-shot performance outperforms all other models on the metric of average accuracy score up to 4-shot linear classifiers trained on the same dataset [23]. The limitations are that scalin...
031e103c-a527-4ce7-9ff4-35108a47f1f4
DeCLIP employs supervision across modalities as well as self-supervision within each modality, whereas ReCLIP first learns pseudo labels and then applies cross-modality self-supervision. CoCA, on the other hand, skips cross-attention in some decoder layers to encode unimodal text representations and cross-atte...
99376fdf-7135-4b57-a274-4e64cd677111
Since this work focuses mainly on the training data, we will not evaluate all the individual strategies that aim to increase performance. Instead, we use the vanilla CLIP model and apply basic transfer learning methods that work with limited hardware resources, which also demonstrate the effectiveness i...
8322bfaf-71c2-4ec1-b80a-9a498ec4476c
This process is usually more suitable for adapting to small sets of data that are closely related to the dataset CLIP was pre-trained on, such as everyday objects and general concepts. On the other hand, in tasks where the dataset is highly specific, i.e., involves specialized knowledge, transfer learning is better suited, as it ...
3925cbe7-e2c3-4e9b-a56c-d12057dfe45f
[Figure 4 diagram residue: (1) Online catalogs, (2) Web crawling, (3) Pre-filtering, (4) Processing (LLM), (5) Post-filtering, (6) Downloading] Figure 4. Dataset generation pipeline resulting in the Industrial Language-Image Dataset (ILID). ...the zero-shot model are preserved and optimized for generalization to novel...
f1c46544-8d6d-41fa-8fd4-7e9dd99463b7
Notable works in transfer learning of CLIP are adapter-styled tuning, e.g., CLIPAdapter [52], and prompt learning, e.g., CoOp [53] and APEX [54]. CLIPAdapter (s. Fig. 5) adds dense down- and up-sampling layers on top of CLIP, either to the image, text, or both encoders. Thereby, only the most prominent features are com...
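A minimal sketch of the adapter-styled tuning described above, assuming a 512-dimensional CLIP feature, a bottleneck reduction factor of 4, and a residual ratio alpha; the layer sizes and the alpha value are assumptions, not CLIPAdapter's published configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Dense down-/up-sampling layers blended with the frozen CLIP feature."""
    def __init__(self, dim=512, reduction=4, alpha=0.2):
        super().__init__()
        self.alpha = alpha
        self.bottleneck = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.ReLU(inplace=True),
        )

    def forward(self, clip_feature):
        adapted = self.bottleneck(clip_feature)
        # Residual blend: alpha * adapted + (1 - alpha) * original feature
        return self.alpha * adapted + (1 - self.alpha) * clip_feature

# Only the adapter's weights are trained; the CLIP encoders stay frozen.
image_adapter = Adapter()
features = torch.randn(8, 512)          # stand-in for frozen image encoder output
adapted_features = image_adapter(features)
```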
475e70ad-9476-4c0f-9199-45f497467194
Concretely, CoOp creates a set of learnable vectors, initialized randomly or from given text embeddings, which the model adapts during training. APEX is the most recent approach that also evaluates adding learnable tokens to the residual transformer blocks in the image encoder. Besides, APEX introduces a resi...
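For illustration, the following sketch shows CoOp-style prompt learning: a few learnable context vectors are prepended to the frozen token embeddings of each class name before they enter the text encoder. The number of context tokens, the embedding dimension, and the tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class LearnableContext(nn.Module):
    """n_ctx learnable context vectors shared across all class prompts."""
    def __init__(self, n_ctx=4, dim=512):
        super().__init__()
        # Randomly initialized; could instead be initialized from a text embedding
        self.context = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)

    def forward(self, class_token_embeddings):
        # class_token_embeddings: (num_classes, n_tokens, dim) from the frozen embedder
        n_cls = class_token_embeddings.size(0)
        ctx = self.context.unsqueeze(0).expand(n_cls, -1, -1)
        # Prepend the learnable context to every class name, e.g. "[P1][P2][P3][P4] hinge"
        return torch.cat([ctx, class_token_embeddings], dim=1)

prompt_learner = LearnableContext()
class_tokens = torch.randn(10, 8, 512)   # stand-in for embedded class names
prompts = prompt_learner(class_tokens)   # (10, 12, 512), fed into the frozen text encoder
```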
a08a7bd6-0dc8-49b2-a50b-c45bcff0d968
Dataset generation pipeline Following a typical data pipeline structure, including data selection, transformation, and pre-/post-filtering (s., e.g., [22]), we employed six steps (s. Fig. 4) to generate the Industrial Language-Image Dataset (ILID). Each of the steps results in a structured JSON document containing all t...
fa851881-cb33-4fbe-8ec8-be2080599eeb
2. Web crawling data from online catalogs follows two basic steps: getting the sitemap from robots.txt and writing a crawler for the specific structure of the product pages. The top-level robots.txt file delineates the Robots Exclusion Protocol, which guides crawlers and other bots on which sections of the website ...
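As a hedged sketch of this crawling step, the snippet below reads the sitemap location(s) from robots.txt and collects product-page URLs; the catalog base URL and the URL filter keyword are placeholders, and a production crawler (the authors mention Scrapy) would additionally honor the disallow rules and crawl delays stated in robots.txt.

```python
import urllib.request
import xml.etree.ElementTree as ET

BASE = "https://catalog.example.com"   # placeholder online catalog

def sitemap_urls_from_robots(base_url):
    # robots.txt usually lists one or more "Sitemap:" entries
    robots = urllib.request.urlopen(base_url + "/robots.txt").read().decode("utf-8")
    return [line.split(":", 1)[1].strip()
            for line in robots.splitlines()
            if line.lower().startswith("sitemap:")]

def product_pages(sitemap_url, keyword="/product/"):
    # Sitemaps are XML files whose <loc> elements hold the page URLs
    tree = ET.fromstring(urllib.request.urlopen(sitemap_url).read())
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return [loc.text for loc in tree.findall(".//sm:loc", ns)
            if keyword in (loc.text or "")]

for sitemap in sitemap_urls_from_robots(BASE):
    for url in product_pages(sitemap):
        print(url)   # feed these into the page-specific crawler
```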
72e39c3d-be61-4221-bc66-a394bd766afd
Besides a central label tag for each entry, we save an unstructured list-typed data object, which can contain all other available information about the product, like materials, finish, colors, etc. Using the sitemap as the initial crawling entry point is a common first step for online search engines. 3. In the pre-fi...
0e657c86-7ced-4273-830b-b4c34fe59fa3
We define these as (1) a long label describing the product, (2) a short label that is shorter than the long label, (3) a description of the product, (4) the material, (5) the finish or color of the product (s. also Fig. 2). In our study, we used Llama3-8B [19] 3 Scrapy: A Fast and Powerful Scraping and Web Crawling Fr...
efea53e0-fdc6-4cda-8645-4442bffde53c
in the fine-tuned instruct version (s. Sec. 6 for the respective prompt). We ask the LLM not to output any numbers or sizes; additionally, we remove them from the initial data since, on the one hand, we do not expect that a 2D image task can identify or recognize any dimensional quantities given different camera positions a...
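To illustrate this processing step, the sketch below prompts an instruct LLM for the five fields and keeps only replies that parse as JSON; the prompt wording paraphrases the listing in Sec. 6, the JSON-output instruction and the chat_fn interface are assumptions, and the dummy function merely stands in for a real Llama3-8B call.

```python
import json

FIELDS_PROMPT = (
    "Summarize 'Label: {label} Text: {text}' returning the following information:\n"
    "(1) a long label or name of the product without ids, numbers, codes, or sizes\n"
    "(2) a short label or name of the product with a maximum of 4 words\n"
    "(3) a short description of the product without numbers or sizes\n"
    "(4) material with a maximum of 5 words\n"
    "(5) material finish/color with a maximum of 5 words\n"
    "Answer as JSON with keys label_long, label_short, description, material, material_finish."
)

def extract_fields(label, raw_texts, chat_fn):
    """chat_fn(prompt) -> str is any instruct-LLM call, e.g. a local Llama3-8B endpoint."""
    prompt = FIELDS_PROMPT.format(label=label, text=" ".join(raw_texts))
    reply = chat_fn(prompt)
    try:
        return json.loads(reply)        # keep the entry only if the reply parses
    except json.JSONDecodeError:
        return None                     # post-filtering drops malformed outputs

# Example with a dummy chat function standing in for the real model
dummy = lambda p: ('{"label_long": "bracket hinge for clevis mounting", '
                   '"label_short": "clevis mounting hinge", "description": "...", '
                   '"material": "steel", "material_finish": "..."}')
print(extract_fields("Hinge, detachable, type R", ["Hinges are distinguished by ..."], dummy))
```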
4c4ab7e9-826f-4c0d-833c-a3e09d3c115a
With the given steps, we are able to extract a product’s image and a structured set of five pieces of information. Besides, we observed that even a small model such as Llama3-8B in its instruct fine-tuned version is mostly able to extract the demanded information from the unstructured text. We show an excer...
538c8a47-e1a5-470e-b698-5748ee193530
While we estimate that the images we want to learn from and also infer on show characteristics similar to CLIP’s in-distribution data, compared to fully out-of-distribution image data as in the case of, e.g., PatchCamelyon [41] (s. Sec. 2.2.2), we employ only a simple trainable adapter on the image stream as propo...
1f5f327c-74ec-44ab-95d4-8cfae66b27af
In contrast, prompt engineering is a crucial task for learning as well as inference with textual, promptable models. In a preliminary study, we have already observed that vanilla CLIP performs differently given different prompt templates like ”a photo of {}.” compared to ”a photo of {}, an industrial product.” The ...
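The following sketch shows how such prompt templates enter zero-shot classification: each label is inserted into a template, encoded, and compared to the image embedding via cosine similarity. The dummy encoders and the 512-dimensional embeddings are placeholders for CLIP's actual text and image encoders.

```python
import torch
import torch.nn.functional as F

TEMPLATES = ["a photo of {}.", "a photo of {}, an industrial product."]
LABELS = ["hinge", "handle", "rod end", "collet"]

def zero_shot_scores(image_emb, text_encoder, template):
    # Build one prompt per label, encode, and compare with the image embedding
    prompts = [template.format(label) for label in LABELS]
    text_emb = F.normalize(text_encoder(prompts), dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    return (image_emb @ text_emb.t()).softmax(dim=-1)

# Dummy stand-ins: in practice these would be CLIP's encoders
dummy_text_encoder = lambda prompts: torch.randn(len(prompts), 512)
image_embedding = torch.randn(1, 512)

for template in TEMPLATES:
    print(template, zero_shot_scores(image_embedding, dummy_text_encoder, template))
```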
8b3feab4-6369-4f1c-bc12-6199de352bc7
3.2.2 Training During the pre-training of CLIP, a very large minibatch size of 32,768 was used, which took a total of 12 days on 256 V100 GPUs for the largest Vision Transformer (ViT) configuration (428M parameters) [23]. Compared to the pre-training, during transfer learning with CoOp, we have a total of cn × 512 traina...
6176c154-1523-4a2a-b1aa-dede6f3b6c30
In contrast, fine-tuning or transfer learning approaches typically contrast all possible class labels against a set of images [52–54, 56] in benchmark studies on datasets like ImageNet [57], which is why non-contrasting samples are not possible as long as the classes are conceptually far away from each othe...
501e42cb-a4c1-4765-9bfb-b94773f45408
We changed from vanilla SGD to Adadelta [58], an SGD variant that adapts learning rates over time using only first-order information. 4. Experiments In this section, we present a series of studies utilizing ILID, designed to evaluate the effectiveness of the dataset and transfer learning approach for different tasks....
337bb4be-b7a5-40b3-ac5e-3ed9183af4c9
[Figure 5 diagram residue: (a) CLIP Adapter with a residual blend (α, 1−α) of adapter output and frozen encoder features; (b) CoOp with learnable context tokens [P1][P2]…[Px] prepended to the class name, e.g., ”hinge”; learnable and frozen components marked] Figure 5. The architectures used in this work: (a) CLIPAdapter [52] and (b) CoOp [53]. 4.1....
0549c4e5-cd6e-49f3-b339-ab6d3971b3cc
[Figure 6 residue: bar chart of word frequencies, e.g., steel, stainless, clamp, plunger, clamping, lever, ball, indexing, adjustable, with, aluminum, handle, knob, connector, spring, hinge, latch, profile, plastic, toggle, linear, hand, swivel, grip, bearing, gear, assembly, valve, handles, star, feet, handwheel, leveling, screw, lock, plate, roller, set, joint, aluminium] Figure 6. Top-40 word occurrences in the label label_short. Fig. 6 de...
bb6c01d7-9118-4e87-8855-d7487f3c3ad3
So, nearly every sample has a unique description, but only two labels, on average, share the same label_short. Since we do not account for minor preposition words like a/an/the in the labels, the labels are slightly more similar on the semantic level. However, we estimate a good diversity in the dataset, and since...
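For readers who want to reproduce such label statistics on their own copy of ILID, a small sketch is given below; the metadata file name, the assumption of one JSON object per sample, and the list of ignored minor words are placeholders.

```python
import json
from collections import Counter

def top_words(samples, field="label_short", k=40, ignore=("a", "an", "the")):
    """Count word occurrences over a label field, skipping minor words like a/an/the."""
    counter = Counter()
    for sample in samples:
        counter.update(w for w in sample[field].lower().split() if w not in ignore)
    return counter.most_common(k)

# Placeholder path: the post-processed ILID metadata, one JSON object per sample
with open("ilid_metadata.json") as f:
    samples = json.load(f)

for word, count in top_words(samples):
    print(f"{word}: {count}")
```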
a7b29b64-6277-48a8-917d-eb2ee904c2b1
Setup We built upon the code base of Dassl [59, 60] and trained on a single 4090 GPU. We chose random sampling, an image input size of 224×224, and CLIP’s pre-trained ViT-B/16 as the image encoder instead of the ResNet version, as ViTs have much less image-specific inductive bias than CNNs. We initialized CoOp with a c...
8056e75c-3cb2-47c3-89f5-e0214ca27a68
We use Adadelta [58] with a learning rate of 0.15 and a cosine learning rate scheduler with a weight decay of 1e-3. Besides, we used 3 warm-up epochs with a constant learning rate of 1e-2 to prevent rapid param-
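A minimal sketch of this training schedule (Adadelta with learning rate 0.15 and weight decay 1e-3, three warm-up epochs at a constant 1e-2, then cosine decay); the stand-in model, the LambdaLR wiring, and the final learning rate of zero are assumptions, since the authors build on the Dassl code base rather than this exact loop.

```python
import math
import torch
from torch.optim import Adadelta
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(512, 512)     # stand-in for the trainable adapter/prompt weights
epochs, warmup_epochs = 100, 3
base_lr, warmup_lr = 0.15, 1e-2

optimizer = Adadelta(model.parameters(), lr=base_lr, weight_decay=1e-3)

def lr_factor(epoch):
    # Constant warm-up learning rate for the first epochs, then cosine decay of the base lr
    if epoch < warmup_epochs:
        return warmup_lr / base_lr
    progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(epochs):
    # ... training steps for this epoch would go here ...
    scheduler.step()
```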
40c40951-06d9-45f5-9af1-8afcaa0da539
eter changes in the initial training stages, which can lead to early overfitting. 4.3. Quantitative results Since we do not have a different real-world language-image dataset at hand, we used 6-fold cross-validation during the evaluation of the different model architectures. Fig. 7 depicts the validation results of tr...
23165372-5e95-4fa7-84d6-b55040da54f6
1) is that all transfer learning approaches effectively outperform CLIP’s zero-shot capabilities, even the top-3 accuracies after training for ≈20 epochs, highlighting that the ILID is out-of-distribution. Even training on the less information-rich label_short outperforms CLIP’s zero-shot capabilities. CLIP highly ...
409cd147-2f6e-4950-ae00-1b4e15ce643a
As expected, the more trainable weights we add, the better the model adapts to the data; domain generalization to the in-distribution data reaches a maximum accuracy of 79.93% for label_short and 84.31% for label_long, respectively, and an image adapter is crucial to effective transfer lear...
93a6d58b-205f-4186-a25d-6a872dfa52be
To gain an understanding of how transfer learning affects the embeddings further, we derived the image and text embeddings after training on the full ILID given the label label_short for 100 epochs. Fig. 8 visualizes the high-dimensional embeddings of the same 100 samples. With each transfer learning method, adding mor...
5b9e24eb-f3a0-4c94-a429-f92a8be0d001
Prompting for materials Besides training and testing on the label_short and label_long, we additionally trained CoOpIATA for 100 epochs on the material label with the initial prompt ”X X X X a photo of an industrial product with material {}”. We then evaluated the zero-shot and CoOpIATA performance on the images depict...
de75c89f-f3a9-454a-8a17-9d66c4efac40
Interestingly, a prompt including {aluminum} results in lower scores than using the word {aluminium}, which indicates that the subtleties or discrepancies of the language used in an industrial context are captured neither after the transfer learning nor in the zero-shot case. That is why we added both words in the prom...
6febc2bb-52b7-4d86-a084-b70a1aea7cdd
These results again underline a natural-language-supervised VFM’s rich multimodal capabilities. 4.5. Language-guided segmentation A typical downstream task is language-guided segmentation utilizing the Segment Anything Model (SAM) [61]. SAM is a class-agnostic, point-promptable image segmentation model that output...
80760d96-3b98-4193-8dd1-95de3b02da0b
[Figure 7 plot residue: 6-fold cross-validation accuracy (%) over epochs 10–100 for (a) Label Short and (b) Label Long, with curves for CoOp (top-1), CoOpIA (top-1), CoOpIATA (top-1/3), CLIPAdapter (top-1), and zero-shot CLIP (top-1/3)] Figure 7. Results of 6-fold cross-validation during transfer learnin...
65434ad0-b437-4657-a987-cacb6c2822f1
sample a point grid and subsequently use Non-Maximum Suppression (NMS) to merge and reduce a large set of masks into more precise proposals. In the simplest form, language-guided image segmentation based on SAM and CLIP can be employed by applying CLIP to all generated masks, which we cut out with a part...
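A hedged sketch of this simple SAM-plus-CLIP combination: generate class-agnostic masks, cut out each mask region with some padding, score the crops against the text prompt, and keep masks above a threshold. The mask_generator and clip_score callables, the padding, and the threshold are placeholder assumptions rather than the authors' exact settings.

```python
import numpy as np

def language_guided_masks(image, prompt, mask_generator, clip_score, pad=10, threshold=0.5):
    """mask_generator(image) -> list of dicts with a boolean 'segmentation' array;
    clip_score(crop, prompt) -> similarity in [0, 1]. Both are placeholder callables."""
    selected = []
    for mask in mask_generator(image):
        seg = mask["segmentation"]
        ys, xs = np.where(seg)
        if len(xs) == 0:
            continue
        # Crop the mask's bounding box with a small padding margin
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, image.shape[0])
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, image.shape[1])
        crop = image[y0:y1, x0:x1]
        # Keep the mask if CLIP scores the crop above the threshold for the prompt
        if clip_score(crop, prompt) >= threshold:
            selected.append(seg)
    return selected

# Example with dummy callables standing in for SAM and CLIP
dummy_masks = lambda img: [{"segmentation": np.zeros(img.shape[:2], dtype=bool)}]
dummy_score = lambda crop, prompt: 0.0
print(language_guided_masks(np.zeros((64, 64, 3)), "collet", dummy_masks, dummy_score))
```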
e61413d8-e564-4d8c-8155-334851f1a586
For completeness, it should be mentioned that we did not compare it against the other approaches, e.g., CLIPAdapter. Fig. 10 depicts the segmentation results in a challenging scene composed of multiple collets stacked on a trolley. The zero-shot results do have many true positives, but overall, we are not able to obs...
232e949e-3543-421e-8ba4-a5194f23808a
Conclusion and Outlook Using VFMs as a building block in an industrial vision application is a promising and transformative technique, improving systems’ accuracy, speed, and reliability, e.g., involved in inspection, robotic control, parts identification, and process control, leading to enhanced operational effi-...
502cc711-e1ad-46d0-90c9-68e606cfdb1e
[Figure 9 residue: panels (a)–(e)] Figure 9. Five different real-world images used for prompting material properties. [Figure 10 residue: panels Input | Zero-shot CLIP | Ours] Figure 10. Language-guided segmentation results given prompt ”collet” compared to zero-shot CLIP under the same settings (segmentation properties and thresholds). Table 2. Scores on pre...
b2896e49-1897-42de-bfa3-3687981694b2
097
”aluminum or aluminium”: 0.043 0.143 0.166 0.238 0.094
”anodized aluminum or aluminium”: 0.030 0.143 0.070 0.064 0.023
”plastic”: 0.352 0.244 0.099 0.107 0.280
”brass”: 0.156 0.020 0.223 0.282 0.240
CoOpIATA trained on the material label
”steel”: 0.007 0.033 0.950 0.829 0.137
”polyamide”: 0.135 0.368 0.004 0.008 0.361
”t...
39ce2dc5-052b-4398-b094-3eeeb721dacb
020 0.011 0.001
”anodized aluminum or aluminium”: 0.007 0.374 0.003 0.007 0.001
”plastic”: 0.694 0.135 0.008 0.041 0.077
”brass”: 0.139 0.000 0.012 0.104 0.264
introducing the Industrial Language-Image Dataset (ILID) to bring industrial context into CLIP and evaluating effective self-supervised transfer learning from th...
618b1138-0a2c-4b55-bb2a-bc72dca91dad
One can argue that large digital players like OpenAI or Meta could also incorporate industrial data during the training of their models; however, the overall proposed method, from dataset curation to fine-tuning CLIP, also suits, e.g., companies with intellectual property constraints or limitations in available com...
bcb5b328-fb24-4104-97d8-fe4dcf0e0875
The confusion between the same concept but differently termed in American (aluminum) and British (aluminium) English shows that there is a need for pre-training of the text encoder with broader natural language, e.g., even with extended context, which would enable not only training on shorter image labels. Further,...
3dba94ab-c179-40e0-bf74-4b3bfc2ab21c
The most limiting characteristic is including, or inferencing with, dimensional quantities, which can hardly be solved when training on images captured with different cameras and their individual intrinsics. With this work, we hope to encourage the industrial community to employ and work on using VFMs in the industrial doma...
6d36576a-041a-4ef7-9743-ac41309a839f
2, 3, 5, 11, and 12) in this publication. CRediT author statement K. Moenck: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration; D.T. Thieu: Conceptualiza...
b76eaa09-c104-4621-bd39-252dffdf7d7f
doi:10.1016/j.procir.2021.11.211. 1 [2] D. Schoepflin, K. Iyer, M. Gomse, T. Schüppstuhl, Towards synthetic ai training data for image classification in intralogistic settings, in: Schüppstuhl (Ed.) 2022 – Annals of Scientific Society, Springer Cham, 2022, pp. 325–336. doi:10.1007/978-3-030-74032-0_27. [3] D...
e4e5881a-3b57-4a76-910b-df1c5db494b4
), Flexible Automation and Intelligent Manufacturing, Lecture Notes in Mechanical Engineering Ser, Springer International Publishing AG, Cham, 2022, pp. 284–292. doi:10.1007/978-3-031-18326-3_28. [4] O. Schmedemann, M. Baaß, D. Schoepflin, T. Schüppstuhl, Procedural synthetic training data generation for ai-based ...
836bee01-b230-41d7-816a-f29142e42f08
1 [5] B. Drost, M. Ulrich, P. Bergmann, P. Hartinger, C. Steger, Introducing mvtec itodd — a dataset for 3d object recognition in industry, in: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), IEEE, 2017, pp. 2200–2208. doi:10.1109/ICCVW.2017.257. 1 [6] P. Bergmann, M. Fauser, D. Sattlegger, C...
f33a3f83-6de4-44b1-a116-538b91b6a8b3
[7] P. Bergmann, K. Batzner, M. Fauser, D. Sattlegger, C. Steger, The mvtec anomaly detection dataset: A comprehensive real-world dataset for unsupervised anomaly detection, International Journal of Computer Vision 129 (4) (2021) 1038–1059. doi:10.1007/s11263-020-01400-4. [8] P. Bergmann, K. Batzner, M. Fauser, D...
a57d219e-3d0f-44f0-857a-0058d21d3fa2
1 [9] H. Bai, S. Mou, T. Likhomanenko, R. G. Cinbis, O. Tuzel, P. Huang, J. Shan, J. Shi, M. Cao, Vision datasets: A benchmark for vision-based industrial inspection (2023). doi:10.48550/arXiv.2306.07890. 1 [10] L. Büsch, J. Koch, D. Schoepflin, M. Schulze, T. Schüppstuhl, Towards recognition of human actions i...
7c93d51f-228c-4b22-af93-35a69e079355
1 [11] J. Zhang, J. Huang, S. Jin, S. Lu, Vision-language models for vision tasks: A survey, IEEE Transactions on Pattern Analysis and Machine Intelligence PP (2024). doi:10.1109/TPAMI.2024.3369699. 1 [12] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut...
9034e9b1-8472-4d09-bf93-953286f44b32
doi:10.48550/arXiv.1810.04805. 1 [14] P. Budzianowski, I. Vulić, Hello, it’s gpt-2 – how can i help you? towards the use of pretrained language models for task-oriented dialogue systems (2019). doi:10.48550/arXiv.1907.05774. 1 [15] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan...
a5b7dc71-e655-4074-9a48-47548e2d008a
[16] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., Gpt-4 technical report (2023). doi:10.48550/arXiv.2303.08774. 1 [17] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro,
770a1100-949f-4c87-8925-0d1fcc643f47
F. Azhar, et al., Llama: Open and efficient foundation language models (2023). doi:10.48550/arXiv.2302.13971. 1 [18] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., Llama 2: Open foundation and fine-tuned chat models (2023). doi:10.48550...
763e8fcd-cc58-4f20-ae7e-a894f0e41a38
URL https://ai.meta.com/blog/meta-llama-3/ 1, 5 [20] M. Awais, M. Naseer, S. Khan, R. M. Anwer, H. Cholakkal, M. Shah, M.-H. Yang, F. S. Khan, Foundational models defining a new era in vision: A survey and outlook (2023). doi:10.48550/arXiv.2307.13721. 1 [21] C. Schuhmann, R. Beaumont, R. Vencu, C. Gordon, R. Wightman...
47d6ea10-0f26-4c5b-a192-03741ae7f5d1
1, 2 [22] S. Changpinyo, P. Sharma, N. Ding, R. Soricut, Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts (2021). doi:10.48550/arXiv.2102.08981. 1, 2, 5 [23] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, ...
b3390305-d11b-46b3-aee8-852376d0a028
Chen, Z. Parekh, H. Pham, Q. V. Le, Y. Sung, Z. Li, T. Duerig, Scaling up visual and vision-language representation learning with noisy text supervision, International Conference on Machine Learning (2021). doi:10.48550/arXiv.2102.05918. 1 [25] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recog...
d35886ed-fe60-44a0-87f0-6e15cd69da2d
2 [27] A. Naumann, F. Hertlein, L. Dörr, S. Thoma, K. Furmans, Literature review: Computer vision applications in transportation logistics and warehousing (2023). doi:10.48550/arXiv.2304.06009. 2 [28] K. Moenck, A. Wendt, P. Prünte, J. Koch, A. Sahrhage, J. Gierecker, O. Schmedemann, F. Kähler, D. Holst, M. G...
459277bc-9089-4d4c-abd3-b89a57ebc086
Wang, J. Yang, X. Wang, S. Wang, O. Kwan, A framework and operational procedures for metaverses-based industrial foundation models, IEEE Transactions on Systems, Man, and Cybernetics: Systems 53 (4) (2023) 2037–2046. doi:10.1109/TSMC.2022.3226755. 3 [30] H. Zhang, S. S. Dereck, Z. Wang, X. Lv, K. Xu, L. Wu, Y. Jia,...
44643e78-7005-4ac9-9550-3a06d7664f8c
3 [31] L. Makatura, M. Foshey, B. Wang, F. Hähnlein, P. Ma, B. Deng, M. Tjandrasuwita, A. Spielberg, C. E. Owens, P. Y. Chen, et al., How can large language models help humans in design and manufacturing? (2023). doi:10.48550/arXiv.2307.14377. 3 [32] C. Picard, K. M. Edwards, A. C. Doris, B. Man, G. Giannone, M....
48913af9-3005-4e19-bb09-3a6c6c7fa73c
Mori, H. Takahashi, R. Oka, Image-to-word transformation based on dividing and vector quantizing images with words, in: First international workshop on multimedia intelligent storage and retrieval management, Vol. 2, 1999. 3 [34] A. Quattoni, M. Collins, T. Darrell, Learning visual representations using images w...
b7ca61c3-ea44-499f-b1ea-650aa6f92232
3 [36] M. B. Sariyildiz, J. Perez, D. Larlus, Learning visual representations with caption annotations (2020). doi:10.48550/arXiv.2008.01392. 3 [37] Y. Zhang, H. Jiang, Y. Miura, C. D. Manning, C. P. Langlotz, Contrastive learning of medical visual representations from paired images and text (2020). doi:10.48550/...
eaf4e12f-26fb-4404-80f1-ae1309a34590
3 [39] P. Helber, B. Bischke, A. Dengel, D. Borth, Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification (2017). doi:10.48550/arXiv.1709.00029. 4 [40] G. Cheng, J. Han, X. Lu, Remote sensing image scene classification: Benchmark and state of the art, Proceedings of the IEEE...
4afb6109-0b8e-4d2f-a02e-a46ef6aa886c
4, 6 [42] A. Kolesnikov, L. Beyer, X. Zhai, J. Puigcerver, J. Yung, S. Gelly, N. Houlsby, Big transfer (bit): General visual representation learning (2019). doi:10.48550/arXiv.1912.11370. 4 [43] T. Chen, S. Kornblith, K. Swersky, M. Norouzi, G. Hinton, Big self-supervised models are strong semi-supervised learner...
e3c3b183-f591-460b-9c5f-612f03b6cab9
trastive language-image pre-training paradigm (2021). doi:10.48550/arXiv.2110.05208. 4 [45] S. Goel, H. Bansal, S. Bhatia, R. A. Rossi, V. Vinay, A. Grover, Cyclip: Cyclic contrastive language-image pre-training (2022). doi:10.48550/arXiv.2205.14459. [46] X. Hu, K. Zhang, L. Xia, A. Chen, J. Luo, Y. Sun...
b3092cc1-6c8b-472e-b10a-3474f253675e
Vasudevan, L. Yeung, M. Seyedhosseini, Y. Wu, Coca: Contrastive captioners are image-text foundation models (2022). doi:10.48550/arXiv.2205.01917. 4 [48] Y. Rao, W. Zhao, G. Chen, Y. Tang, Z. Zhu, G. Huang, J. Zhou, J. Lu, Denseclip: Language-guided dense prediction with context-aware prompting (2021). doi:10.48...
a8240f23-30ad-4c44-ba62-e08f33a153a6
doi:10.48550/arXiv.2111.07783. 4 [50] N. Mu, A. Kirillov, D. Wagner, S. Xie, Slip: Self-supervision meets language-image pre-training (2021). doi:10.48550/arXiv.2112.12750. 4 [51] Q. Sun, Y. Fang, L. Wu, X. Wang, Y. Cao, Eva-clip: Improved training techniques for clip at scale (2023). doi:10.48550/arXiv.2303.1...
0937333d-8249-48d3-9045-30e46ac3b004
04544. 5, 6, 7 [53] K. Zhou, J. Yang, C. C. Loy, Z. Liu, Learning to prompt for vision-language models (2022). doi:10.1007/s11263-022-01653-1. 5, 6, 7 [54] Y. Yang, J. Ko, S.-Y. Yun, Improving adaptability and generalizability of efficient transfer learning for vision-language models (2023). doi:10.48550/arXiv.2...
f2a35e0f-5f86-427e-b745-4f3ca637cfee
5 [56] K. Zhou, J. Yang, C. C. Loy, Z. Liu, Conditional prompt learning for vision-language models (2022). doi:10.48550/arXiv.2203.05557. 6 [57] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in: IEEE Conference on Computer Vision and Pattern Recogniti...
c0d895ae-3d10-450e-81c1-6fa0850650de
6, 7 [59] K. Zhou, Y. Yang, Y. Qiao, T. Xiang, Domain adaptive ensemble learning (2021). doi:10.1109/TIP.2021.3112012. 7 [60] K. Zhou, Z. Liu, Y. Qiao, T. Xiang, C. C. Loy, Domain generalization: A survey (2023). doi:10.1109/TPAMI.2022.3195549. 7 [61] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gu...
ff10f718-4361-40c8-84fe-03f5bfb3ba09
8 [62] H. Wang, P. K. A. Vasu, F. Faghri, R. Vemulapalli, M. Farajtabar, S. Mehta, M. Rastegari, O. Tuzel, H. Pouransari, Sam-clip: Merging vision foundation models towards semantic and spatial understanding (2023). doi:10.48550/arXiv.2310.15308. 14
59e422f0-5a1d-4530-8101-a848632edf1f
6. Llama-3 prompt We followed basic prompt assembly as described for Llama-2 [18] because, up to the date of this publication, an in-depth explanation of Llama-3 was still missing. The Llama-2 chat version was trained with a variety of system prompts following patterns like ”You are a helpful, respectful an...
8a35fc0b-1b11-4724-ab0a-56272e090675
\n Do not ask for further details or state additional questions. \n Do not add additional information or details that are not given by the user. \n Listing 2. User prompt used in the ILID generation pipeline’s text transformation step.
019286ba-0536-437e-a264-caa82dda1922
Summarize ’Label: {{}} Text: {{}}’ \n returning the following information: \n (1) a long label or name of the product without ids, numbers, codes, or sizes (2) a short label or name of the product with a maximum of 4 words and shorte...
b35c4027-8c31-4c57-97a3-21e8660cdb0e
or sizes (4) material with a maximum of 5 words (5) material finish/color with a maximum of 5 words 7. Excerpt from the dataset Fig. 11 and Fig. 12 each depict two samples from the ILID given the keywords ”hinge” and ”locking assembly”. Based on the language label, we can observe that...
5c75097a-0975-4785-be35-fb32e69136f3
13, we prompted for ”socket”: zero-shot CLIP does not predict any mask as positive, while our approach segments all sockets. In Fig. 14, the results of our most challenging scene are depicted, in which we prompt for ”bracket for construction profile”.
22cea8fb-58d1-4306-8a2e-537883a30c34
The brackets are imaged far differently than the ones from catalog images, and sometimes they are barely
{
  "id": "...",
  "image": "...",
  "label_short": "clevis mounting hinge",
  "label_long": "bracket hinge for clevis mounting",
  "description": "Rigid hinge for clevis mounting ap...
98de050c-0103-429f-a73a-4306f28510cd
{ "id": "...", "image": "...", "label_short": "locking qpq assembly", "label_long": "locking assembly with qpq coating", "description": "High corrosion resistance and improved fatigue strength for food safe applications", "material": "steel", "material_f...
16b66c62-6e31-4d8c-8910-cb58586b595b
At first sight, the results do not show good performance, especially since we have a few non-detected brackets and a few false positive predictions. We explain the false positive at the top with the cropping strategy, while we have no explanation for the false predictions on the lower right. The false positives can...
0aa5e298-dcff-4b44-80a3-1cb0265652f2
[Figure 13/14 residue: panels Input | Zero-shot CLIP | Ours] Figure 13. Language-guided segmentation results given the prompt ”socket” compared to zero-shot CLIP under the same settings. Figure 14. Language-guided segmentation results given the prompt ”bracket for construction profile” compared to zero-shot CLIP under the same ...

This dataset was created using Corpus Creator by parsing a corpus of texts into chunks of sentences using Llama Index.
