Posts

Post not yet marked as solved
0 Replies
1 Views
I have a customer who wants to protect the REST API of their app with a private certificate. They would then distribute the client certificate to the authorized users. Their app would not work unless the client certificate is already installed on the user's phone before they run the app. I have never done this before. Is it possible to install a client certificate on an iPhone without running an app, for example if it were sent in an email message? And if it is possible, is App Review going to let such an app into the App Store? Thanks, Frank
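For context, a minimal sketch of the app-side half of this design: how an app typically presents a client certificate during the TLS handshake with URLSession. All names here are hypothetical, and it assumes the identity (certificate plus private key) is somehow available in the app's own keychain; whether a certificate installed via email or a configuration profile is actually reachable by a third-party app this way is exactly the part worth confirming before committing to the design.

import Foundation
import Security

// Hypothetical delegate: answers a TLS client-certificate challenge with an
// identity looked up from the app's keychain under a made-up label.
final class ClientCertificateDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // Look up the identity; the label is an assumption for this sketch.
        let query: [String: Any] = [
            kSecClass as String: kSecClassIdentity,
            kSecAttrLabel as String: "com.example.rest-api-client",
            kSecReturnRef as String: true
        ]
        var item: CFTypeRef?
        guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess else {
            // No identity available: fail the challenge so the app can tell the user.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let identity = item as! SecIdentity
        completionHandler(.useCredential,
                          URLCredential(identity: identity, certificates: nil, persistence: .forSession))
    }
}

The session would then be created with URLSession(configuration: .default, delegate: ClientCertificateDelegate(), delegateQueue: nil).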
Posted
by
Post not yet marked as solved
0 Replies
6 Views
I noticed that I have duplicate apps on my iPhone. I've signed up for the Public Beta program, so this might be a feature or bug in the beta. What I first noticed was that one of my apps showed up in the Dock (I don't remember putting it there) a few days ago, and in looking around, I still have the same app in a folder on my home screen. I thought this could be a nice feature allowing me quick access to it from both the Dock and the folder, but it wasn't the app that I'd want there. I thought I might be able to move it off the Dock and back to the folder, but when I did that, the app showed up twice in the folder! I moved a different app to the Dock, and it disappeared from the folder, which is what I'd expect. So... if I delete one of the duplicates in the folder (or from the Dock), will that delete both? Here you can see the Alula app in both the 'Remote Control' folder and the Dock. Here too (although the Dock is fuzzy because the folder is open). I moved the app from the Dock back to the folder, and now there are two there. Here's the Public Beta that I'm running.
Posted
by
Post not yet marked as solved
0 Replies
6 Views
Hello, everyone. I'm using Bubble.io to develop a matching app for a transportation company, and I am trying to publish it to the App Store using the BDK plugin. The URL of the matching app is: https://carrier-s.com. I received the following message from the App Store and I'm not sure what to do: "The いいね ("Like") feature can be purchased in the app using payment mechanisms other than in-app purchase." Currently, payments are made via Stripe. If anyone knows anything about in-app purchases, please let me know.
Posted
by
Post not yet marked as solved
0 Replies
8 Views
Hello, I've been trying to get my organization approved, but Apple keeps asking me for documents. I have uploaded certificates of incorporation and that still doesn't work. I wish they were clearer about what type of documentation I need to provide. [Edited by Moderator]
Posted
by
Post not yet marked as solved
0 Replies
12 Views
I am trying to determine the corners of a RoomPlan-detected wall using the information available in the ARView session's frame, but can't quite figure out what I'm doing wrong. The corners appear to be correct relative to each other, but the wall appears too large when I render it. (I'm also not sure I'm handling the image rotation correctly either, which may be compounding my problem.) Here is the code I currently have, along with a sample image, and the resulting image when I pass it through the perspective filter. It is close, but it isn't cropping the walls and floors correctly.

func captureSession(_ session: RoomCaptureSession, didChange room: CapturedRoom) {
    for surface in room.walls {
        if let frame = self.arView.session.currentFrame {
            var image: CGImage? = nil
            VTCreateCGImageFromCVPixelBuffer(frame.capturedImage, options: nil, imageOut: &image)

            let wallTransform = surface.transform
            let cameraTransform = frame.camera.transform
            let intrinsics = frame.camera.intrinsics
            let projectionMatrix = frame.camera.projectionMatrix
            let width = surface.dimensions.y
            let height = surface.dimensions.x
            let inverseCameraTransform = simd_inverse(cameraTransform)

            let wallTopRight = simd_float4(width/2, height/2, 0, 1)
            let wallTopLeft = simd_float4(-width/2, height/2, 0, 1)
            let wallBottomRight = simd_float4(width/2, -height/2, 0, 1)
            let wallBottomLeft = simd_float4(-width/2, -height/2, 0, 1)

            let worldTopRight = wallTransform * wallTopRight
            let worldTopLeft = wallTransform * wallTopLeft
            let worldBottomRight = wallTransform * wallBottomRight
            let worldBottomLeft = wallTransform * wallBottomLeft

            let cameraTopRight = projectionMatrix * inverseCameraTransform * worldTopRight
            let cameraTopLeft = projectionMatrix * inverseCameraTransform * worldTopLeft
            let cameraBottomRight = projectionMatrix * inverseCameraTransform * worldBottomRight
            let cameraBottomLeft = projectionMatrix * inverseCameraTransform * worldBottomLeft

            let imageTopRight = intrinsics * simd_float3(cameraTopRight.x / cameraTopRight.w, cameraTopRight.y / cameraTopRight.w, cameraTopRight.z / cameraTopRight.w)
            let imageTopLeft = intrinsics * simd_float3(cameraTopLeft.x / cameraTopLeft.w, cameraTopLeft.y / cameraTopLeft.w, cameraTopLeft.z / cameraTopLeft.w)
            let imageBottomRight = intrinsics * simd_float3(cameraBottomRight.x / cameraBottomRight.w, cameraBottomRight.y / cameraBottomRight.w, cameraBottomRight.z / cameraBottomRight.w)
            let imageBottomLeft = intrinsics * simd_float3(cameraBottomLeft.x / cameraBottomLeft.w, cameraBottomLeft.y / cameraBottomLeft.w, cameraBottomLeft.z / cameraBottomLeft.w)

            let topRight = CGPoint(x: CGFloat(imageTopRight.x), y: CGFloat(imageTopRight.y))
            let topLeft = CGPoint(x: CGFloat(imageTopLeft.x), y: CGFloat(imageTopLeft.y))
            let bottomRight = CGPoint(x: CGFloat(imageBottomRight.x), y: CGFloat(imageBottomRight.y))
            let bottomLeft = CGPoint(x: CGFloat(imageBottomLeft.x), y: CGFloat(imageBottomLeft.y))

            if let image {
                let filter = CIFilter.perspectiveCorrection()
                filter.inputImage = CIImage(image: UIImage(cgImage: image))
                filter.topRight = topRight
                filter.topLeft = topLeft
                filter.bottomRight = bottomRight
                filter.bottomLeft = bottomLeft
                let transformedImage = filter.outputImage
                if let transformedImage {
                    let context = CIContext()
                    if let outputImage = context.createCGImage(transformedImage, from: transformedImage.extent) {
                        let wall = Wall(id: surface.identifier, image: outputImage, surface: surface)
                        self.walls.append(wall)
                    }
                }
            }
        }
    }
}
Posted
by
Post not yet marked as solved
0 Replies
25 Views
I am developing a parental control app using the Screen Time API and Family Controls. I created two apps, one for the parent and one for the child. I want to see the child device's activity report in the parent app. This works when there is only one parent/organiser, but I am trying to let multiple parents access the device activity report. I created a family group where I am the organiser (Dad), added another account as a parent (Mom), and added two child accounts. On the child's device I installed the app and authorised it for parental approval (Dad) and screen time restrictions. When using the parent app as Mom, I am unable to fetch the child device's activity report.
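For reference, a minimal sketch of the child-side authorization step described above (assuming iOS 16 or later and an app that has the Family Controls entitlement). The function name is made up; this snippet does not settle the multi-guardian behaviour the post asks about.

import FamilyControls

// Request Family Controls authorization on the child's device.
// A parent or guardian in the family group must approve the prompt.
func authorizeChildDevice() async {
    do {
        try await AuthorizationCenter.shared.requestAuthorization(for: .child)
        print("Family Controls authorization granted")
    } catch {
        print("Family Controls authorization failed: \(error)")
    }
}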
Posted
by
Post not yet marked as solved
0 Replies
23 Views
Hello, I have a question about the usage of the function func update(NEFilterSocketFlow, using: NEFilterDataVerdict, for: NETrafficDirection) (https://developer.apple.com/documentation/networkextension/nefilterdataprovider/3543400-update) provided by the NEFilterDataProvider class of the content filter network extension.

If I understand correctly, this function can be called on an instance of NEFilterDataProvider to update an already issued verdict for a network flow. By "issuing a verdict" I mean returning any of .allow()/.drop()/.init(pass:peek:) from handleNewFlow/handleInboundData/handleOutboundData. However, I am having difficulty with it.

My workflow involves maintaining an array of currently active flows. Flows are inserted in handleNewFlow() and deleted when handleReport(report: NEFilterReport) is called with the flowClosed event (flows are identified by their UUID). Then, at some point in the future, based on our business logic, I iterate through the container of "active flows" and attempt to call update(_:using:for:) on all of them, with the intention of changing the already issued verdict. However, calling that function seems to have no effect.

Am I using it the wrong way? What is the intended usage? Is it even possible to update the verdict of flows that have already been allowed, or postponed with .init(pass:peek:)?

The issue I'm trying to solve is that we evaluate flows based on our business logic and return either .drop() or .init(pass:peek:) verdicts for them. Sometimes we want to reevaluate an .init(pass:peek:) verdict immediately, which is when we attempt to call update() and provide a new .init(pass:peek:) or .drop() verdict. The main objective is to promptly drop certain flows on demand, particularly those awaiting further data evaluation due to .init(pass:peek:). Thanks.
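To make the described workflow concrete, here is a minimal sketch of it (class name, bookkeeping, and peek sizes are hypothetical). It illustrates tracking flows from handleNewFlow(_:), forgetting them on flowClosed, and later calling update(_:using:for:) on demand; this is the poster's described approach, not Apple's documented intended usage, and the post reports that the update call appears to have no effect.

import NetworkExtension

class FilterDataProvider: NEFilterDataProvider {

    // Flows postponed with a pass/peek verdict, keyed by UUID.
    private var activeFlows: [UUID: NEFilterSocketFlow] = [:]
    private let queue = DispatchQueue(label: "active-flows")

    override func handleNewFlow(_ flow: NEFilterFlow) -> NEFilterNewFlowVerdict {
        if let socketFlow = flow as? NEFilterSocketFlow {
            queue.sync { activeFlows[flow.identifier] = socketFlow }
        }
        // Postpone the final decision and peek at some data first.
        return .filterDataVerdict(withFilterInbound: true, peekInboundBytes: 4096,
                                  filterOutbound: true, peekOutboundBytes: 4096)
    }

    override func handle(_ report: NEFilterReport) {
        // Forget flows once the system reports them closed.
        if report.event == .flowClosed, let id = report.flow?.identifier {
            queue.sync { _ = activeFlows.removeValue(forKey: id) }
        }
    }

    // Called from business logic when a postponed flow should be dropped immediately.
    // This is the update(_:using:for:) call the post says has no visible effect.
    func dropFlowOnDemand(_ id: UUID) {
        queue.sync {
            guard let flow = activeFlows[id] else { return }
            update(flow, using: .drop(), for: .any)
        }
    }
}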
Posted
by
Post not yet marked as solved
0 Replies
10 Views
I have my own framework, which has been working fine for a long time and compiles in Xcode 15.2. As soon as I updated to Xcode 15.3, my code stopped compiling. I am getting the error below:

module '' does not use additional module map '.framework/Modules/module.modulemap' not used when the module was built

I have both Objective-C and Swift files in my framework.
Posted
by
Post not yet marked as solved
0 Replies
15 Views
Hello, I have created a neural network → k-Nearest Neighbors classifier with Python.

# followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy

# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]

preprocessing = base_spec.neuralNetworkClassifier.preprocessing

# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)

# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)

# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")

# Remove the old classifier outputs.
del base_spec.description.output[:]

# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"

# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")

knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True
pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"

pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/ It works, and I was able to include it in my project. I want to train the model via MLUpdateTask:

var batchInputs: [MLFeatureProvider] = []
let imageconstraint = (model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint)
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue]
var featureProviders = [MLFeatureProvider]()

// URLs where images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL,
                                              constraint: imageconstraint!,
                                              options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
let trainingData = MLArrayBatchProvider(array: batchInputs)

When calling the MLUpdateTask as follows, the context.model from the completionHandler is null. Unfortunately there is no other information available from the compiler.

do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()

I get the following error when I try to access context.model:

Thread 5: EXC_BAD_ACCESS (code=1, address=0x0)

Can someone more experienced tell me how to fix this? It seems like I am missing some parameters? I am currently not splitting the data into training and test sets; the only preprocessing I'm doing is scaling the image down to 227x227 pixels. Thanks!
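For what it's worth, the post omits the actual MLUpdateTask construction, so here is a minimal sketch of how such a task is typically created. The function name, model URL, and batch-provider parameter are assumptions, not the poster's code.

import CoreML

// Hypothetical reconstruction of the elided MLUpdateTask call.
func startUpdate(modelURL: URL, trainingBatch: MLArrayBatchProvider) throws {
    let updateTask = try MLUpdateTask(forModelAt: modelURL,
                                      trainingData: trainingBatch,
                                      configuration: nil,
                                      completionHandler: { context in
        // context.model is only meaningful if the update finished successfully,
        // so check context.task.error before touching it.
        if let error = context.task.error {
            debugPrint("Update failed: \(error)")
            return
        }
        do {
            try context.model.write(to: modelURL)
        } catch {
            debugPrint("Error saving the model \(error)")
        }
    })
    updateTask.resume()
}

Note that forModelAt: expects the URL of a compiled model (.mlmodelc), and inspecting context.task.error in the completion handler is usually the first diagnostic step when context.model looks unusable.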
Posted
by
Post not yet marked as solved
0 Replies
13 Views
On iOS 15, when you minimise the app and then come back to it, the labels in the tabs become truncated. When you tap on them they return to their normal state, and after that you can minimise again and everything is fine. This only happens for a specific font size; in my case it is system 12 bold/regular. Start screen: After returning to the app: Is there something I can do about it?
Posted
by
Post not yet marked as solved
0 Replies
11 Views
Hi! I have already run product page optimization tests in the past, but now I want to test different versions of my icon. I can't see the option to change the icons, though. How can I do that?
Posted
by
Post not yet marked as solved
0 Replies
18 Views
Hey, I have an application for professionals in the medical field and their patients, and I have a question about App Store Guideline 1.2, User-Generated Content. In the application there is a one-to-one connection between a professional and a patient. I want to add comment and chat features, but I don't know if I need to do anything regarding 1.2. Because this is a professional service, I don't expect any abusive content from users, so the question is: do I need to implement all those mechanisms to filter the content? And if I do need to implement them, can I just add a "Report" button in settings?
Posted
by
Post not yet marked as solved
0 Replies
23 Views
After upgrading to iOS 17, Thread Performance Checker is complaining about a priority inversion when converting a CVPixelBuffer to a UIImage through a CIImage instance. Is this a false positive, or a real issue?

- (UIImage *)imageForSampleBuffer:(CMSampleBufferRef)sampleBuffer andOrientation:(UIImageOrientation)orientation {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    UIImage *uiImage = [UIImage imageWithCIImage:ciImage];
    NSData *data = UIImageJPEGRepresentation(uiImage, 90);
}

The code snippet above, when running on a thread set to the default priority, results in the message below:

Thread Performance Checker: Thread running at User-interactive quality-of-service class waiting on a lower QoS thread running at Default quality-of-service class. Investigate ways to avoid priority inversions
PID: 1188, TID: 723209
Backtrace
=================================================================
3   AGXMetalG14              0x0000000235c77cc8 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 2927816
4   AGXMetalG14              0x0000000235ccd784 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 3278724
5   AGXMetalG14              0x0000000235ccf6a4 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 3286692
6   MetalTools               0x000000022f758b68 E712D983-01AD-3FE5-AB66-E00ABF76CD7F + 568168
7   CoreImage                0x00000001a7c0e580 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 267648
8   CoreImage                0x00000001a7d0cc08 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 1309704
9   CoreImage                0x00000001a7c0e2e0 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 266976
10  CoreImage                0x00000001a7c0e1d0 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 266704
11  libdispatch.dylib        0x0000000105e4a7bc _dispatch_client_callout + 20
12  libdispatch.dylib        0x0000000105e5be24 _dispatch_lane_barrier_sync_invoke_and_complete + 176
13  CoreImage                0x00000001a7c0a784 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 251780
14  CoreImage                0x00000001a7c0a46c 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 250988
15  libdispatch.dylib        0x0000000105e5b764 _dispatch_block_async_invoke2 + 148
16  libdispatch.dylib        0x0000000105e4a7bc _dispatch_client_callout + 20
17  libdispatch.dylib        0x0000000105e5266c _dispatch_lane_serial_drain + 832
18  libdispatch.dylib        0x0000000105e5343c _dispatch_lane_invoke + 460
19  libdispatch.dylib        0x0000000105e524a4 _dispatch_lane_serial_drain + 376
20  libdispatch.dylib        0x0000000105e5343c _dispatch_lane_invoke + 460
21  libdispatch.dylib        0x0000000105e60404 _dispatch_root_queue_drain_deferred_wlh + 328
22  libdispatch.dylib        0x0000000105e5fa38 _dispatch_workloop_worker_thread + 444
23  libsystem_pthread.dylib  0x00000001f35a4f20 _pthread_wqthread + 288
24  libsystem_pthread.dylib  0x00000001f35a4fc0 start_wqthread + 8
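Not an answer to the QoS question itself, but a sketch of one way to make the Core Image rendering step explicit instead of letting UIImageJPEGRepresentation trigger it lazily. It is written in Swift for brevity, and the surrounding function name is an assumption; whether it silences the checker would need to be verified.

import CoreImage
import CoreMedia
import CoreVideo

// Reuse one CIContext; creating it per frame is expensive.
let ciContext = CIContext()

func jpegData(for sampleBuffer: CMSampleBuffer) -> Data? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    // Render explicitly here, on a thread/QoS we control, rather than inside
    // UIImageJPEGRepresentation.
    return ciContext.jpegRepresentation(of: ciImage,
                                        colorSpace: CGColorSpaceCreateDeviceRGB(),
                                        options: [:])
}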
Posted
by
Post not yet marked as solved
0 Replies
18 Views
Following the update to iOS 17.4.1, our team has observed a recurring issue across all iPhone browsers within our Virtual Try On web application. Specifically, when users switch between products, the camera permission is disrupted (it changes to "not allowed"), resulting in a black screen in the canvas where the live camera stream normally displays. Several users have reported experiencing the same issue. We kindly request your assistance in addressing this matter. Could you please provide guidance on any potential fixes or workarounds, and an estimated timeline for a resolution? Thank you for your attention to this matter; we look forward to your prompt response.
Posted
by
Post not yet marked as solved
0 Replies
17 Views
Hi, I am facing an issue where VoiceOver does not announce some labels if I select an Indian voice in Settings (Accessibility -> VoiceOver -> Speech -> Voice -> ENGLISH (INDIA)); it works for other countries' voices for the same labels. The caption panel also shows the correct accessibility label, but it just doesn't announce it.
Posted
by
Post not yet marked as solved
0 Replies
12 Views
Hello Geeks, after testing our iOS app using MobSF, the report highlighted that the binary has the Runpath Search Path (@rpath) set. In certain cases an attacker can abuse this feature to run an arbitrary executable for code execution and privilege escalation.

The Runpath Search Path directs the dynamic linker to search for dynamic libraries (dylibs) in a specified order of paths, similar to how Unix searches for binaries in $PATH. However, this setup introduces a vulnerability wherein an attacker could place a malicious dylib in one of the earlier paths, thereby hijacking the legitimate library sought by the linker.

Despite attempting to manually strip the binary following the instructions at https://inesmartins.github.io/mobsf-ipa-binary-analysis-step-by-step/index.html, the same warnings persist in the report. We urgently seek assistance in resolving this issue and eagerly await your response.
Posted
by
Post not yet marked as solved
0 Replies
12 Views
Hello all, if anyone can offer any advice on how to fix this I'd really appreciate it.

Context: I recently changed over from Unreal Engine 4.26 to 5.3, Xcode 13 to 15, and Wwise 2021 to 2023 for my audio plugin development. Previously I hadn't encountered the problems I outline below, and I managed to successfully build many plugins.

The problem: When I run my command (python "/Applications/Audiokinetic/Wwise 2023.1.2.8444/Scripts/Build/Plugins/wp.py" build Mac -c Release -x arm64), which worked before I updated from Xcode 13 to 15, I get the following error:

"xcrun: error: missing DEVELOPER_DIR path: /Applications/Xcode14.app/Contents/Developer"

I've done some google-fu around the problem, but a lot of the things I've tried aren't working. Under Xcode/Settings/Locations/Command Line Tools there is already a selection (Xcode 15.3 (15E204a)). I have installed updated command line tools in Terminal using "xcode-select --install", and I've used Settings/Software Update to make sure everything is up to date. Then I attempted to run the following command as root to point my command line tools at another location:

"sudo xcode-select -s /Applications/Xcode.app/Contents/Developers"

When I do this it runs, but it still gives the same error as before, so I think the issue is the missing 14 from Xcode14.app, because if I run:

"sudo xcode-select -s /Applications/Xcode14.app/Contents/Developers"

then it tells me that I'm trying to set an invalid directory. My actual path to the location of the files is "/Applications/Xcode.app/Contents/Developers", which is why I think the 14 is the issue. My command line tools in Xcode are Xcode 15.3 (15E204a). Does anyone have any thoughts as to why this is an issue? Do I need to install a different version of the command line tools? Please and thanks in advance!
Posted
by
